How I am running Ghost on Azure

I recently had to migrate off of a managed Wordpress blogging platform, and after some research I discovered Ghost. I didn't want to use a shared platform as I wanted more control of my blog, and I also didn't want to build from source or spend a large amount of effort on keeping my blog up to date. This is the solution which I came up with, documented so I could replicate it if needed in the future.

Ghost Docker Image

The Ghost team provide a number of official Docker images which can be used to run a Ghost instance. If you don't know about Docker you must be living under a rock :) Docker is one of the technologies which provides various tools for packaging and running containerized applications. If you are wondering whether Docker (and containers) has legs in the Microsoft ecosystem - the Windows team has invested significant effort introducing the core constructs needed within their OS to support running containers on the Windows platform.

Windows Server 2016 LTSC at launch also included a license to run an enterprise version of Docker at no additional cost, and this scenario is fully supported by Microsoft! Something to also keep in mind when running your workloads on-prem is that it is recommended that you leverage the latest Semi-Annual Channel release for Windows Server, as this release has the latest container bits.

So how can we run Ghost on a Docker host? Well, assuming the Docker tools are already installed on Windows, you would first need to ensure you switch to Linux container mode. Once in this mode you can execute the following command from a prompt.

At this point in time the --platform flag is an experimental feature, therefore we need to enable this option in our Docker Tools for Windows settings.

Now our Docker command would look something like the following

$ docker run --platform linux -d -p 3001:2368 ghost:2

We can also mount a directory on the Docker host as a volume in our running container for our Ghost content (database, images, etc.). If we mount this volume at the default location where Ghost stores content, i.e. "/var/lib/ghost/content", no configuration changes are required; the other option is to mount it elsewhere and set the database location using environment variables.
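For example, a run command which mounts a host directory over the default content path might look like this (the host path is a placeholder):

$ docker run --platform linux -d -p 3001:2368 -v /opt/ghost/content:/var/lib/ghost/content ghost:2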

Ok, now that we are armed with the basic knowledge for running an instance of Ghost on our Docker host, let's move on!

Monitoring Ghost with Azure App Insights

App Insights is an awesome tool for instrumenting and monitoring your applications. The official Ghost codebase does not include App Insights support out of the box, and I did not want to fork the codebase and maintain my own version of the code and Docker image. Therefore I came up with an alternate solution to achieve what I want by leveraging the fact that Docker images are layered.

I use the Ghost V2 alpine image as a base image

I install the application insights npm dependency

As the code is all JavaScript I can insert a single line of JS which bootstraps App Insights at application startup.
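A sketch of what such a Dockerfile could look like - this is one possible approach, not necessarily an exact copy of mine; the bootstrap file name and its location are illustrative:

FROM ghost:2-alpine

# Install the App Insights SDK next to a tiny bootstrap script
WORKDIR /opt/appinsights
RUN npm install applicationinsights

# app-insights.js contains the single bootstrap line:
#   require('applicationinsights').setup().start();
# setup() picks up the APPINSIGHTS_INSTRUMENTATIONKEY environment variable.
COPY app-insights.js .

# Load the bootstrap before Ghost starts
ENV NODE_OPTIONS="--require /opt/appinsights/app-insights.js"

# Restore the working directory expected by the base image
WORKDIR /var/lib/ghost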

I want to rebuild my Docker image every time the base image changes; this will ensure that my custom image always has the latest 2.x version of Ghost. I could execute the Docker build and push manually on my Docker host, but instead I want to automate this process using Azure Container Registry and a Container Build Task. Using the Azure CLI you can execute the following command to create the build task on an existing ACR registry.
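The command looks roughly like the following; note that I am showing the newer az acr task syntax here (the original preview used az acr build-task), and the registry name, repo URL and PAT are placeholders:

$ az acr task create --registry myregistry --name ghost-custom \
    --image ghost-custom:latest \
    --context https://github.com/<me>/<my-ghost-image-repo>.git \
    --file Dockerfile --git-access-token <PAT>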

The image will also be rebuilt every time a change is detected in the base image or the GitHub repo - this is exactly what I want here. I also use the same Docker image tag, which means there will be no manual interaction needed to switch to the new image when we are hosting on Azure App Service, for example.

Hosting our custom image with Azure Web Apps for Containers

Azure App Service is an extremely useful PaaS offering which allows us to host our web applications on a managed platform (Windows or Linux).

There are various application deployment options on Azure App Service; the one which I have chosen is Containers.

In my case I am using Azure App Service for Linux as I want to run Linux-based container images, but there is also support for Windows containers on Azure App Service for Windows.

We have the option to either run the Ghost image ghost:2-alpine directly from the public Docker Hub registry or, as in my case, run a custom image from a private Azure Container Registry.

When running the Ghost Docker image you can easily inject your configuration using environment variables; this can be achieved by adding the relevant configuration items to the Azure App Service application settings UI in the Azure portal.

As you can see I am providing the following configuration:

Azure App Insights Instrumentation Key

Content Path

Blog Url

Disabling Ghost Update check

Database path and file name

Ensure we are running in production mode (default)
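For illustration, the same settings can be applied from the Azure CLI instead of the portal. The keys follow Ghost's double-underscore convention for nested configuration, and all names and values here are placeholders:

$ az webapp config appsettings set --resource-group myblogrg --name myblogapp \
    --settings url=https://blog.example.com NODE_ENV=production \
    privacy__useUpdateCheck=false \
    paths__contentPath=/var/lib/ghost/content \
    database__connection__filename=/var/lib/ghost/data/ghost.db \
    APPINSIGHTS_INSTRUMENTATIONKEY=<your-key>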

Storing Ghost data on Azure Storage(Blobs/Files)

I tried running with the WEBSITES_ENABLE_APP_SERVICE_STORAGE option, which tells Azure App Service to share the /home/ directory across running instances and ensures this storage is persisted across container restarts. Unfortunately I had a number of issues running Ghost in this configuration, therefore I decided to use Azure App Service's preview support for mounting an Azure Storage Account (Blobs/Files) as a volume for our running containers.

I also had issues when I attempted to put all data on either File storage or all data on Blob storage; this was specifically related to the sqlite database file, how locking works, and how symlinking works for the content files. I didn't investigate these issues further but came up with the following working configuration:

Azure Blob Storage for the sqlite database

Azure File Storage for content (images, plugins, logs, themes, etc.)

The following commands will add the relevant volumes from the Azure CLI.
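The commands looked roughly like this (the storage account, access keys and share/container names are placeholders; note the different --storage-type for the two mounts):

$ az webapp config storage-account add --resource-group myblogrg --name myblogapp \
    --custom-id ghostdata --storage-type AzureBlob --account-name mystorageacct \
    --share-name ghost-data --access-key <key> --mount-path /var/lib/ghost/data

$ az webapp config storage-account add --resource-group myblogrg --name myblogapp \
    --custom-id ghostcontent --storage-type AzureFiles --account-name mystorageacct \
    --share-name ghost-content --access-key <key> --mount-path /var/lib/ghost/content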

The Azure team have also recently added the ability to manage mounted storage via the Azure App Service application settings UI in the Azure portal.

We can see the mount points which we have added previously via the Azure CLI. In the previous section we showed which configuration items must be set to ensure these new mount points are used by Ghost.

Simple Ghost backup with Azure Functions

To ensure some continuity after outages or data loss I built a simple Azure Function, scheduled on a weekly basis, which triggers a backup of the Ghost data. It's very simple as it leverages built-in functionality in Ghost as well as in Azure Storage. The code performs the following:

Export the sqlite database to JSON using the built-in API "/ghost/api/v0.1/db/backup?client_id=[CLIENT ID]&client_secret=[CLIENT SECRET]"; the exported file is placed within the data directory in the content root. You can use the client id and secret for the "Ghost Backup" client - you can get these values by opening your sqlite database and browsing the clients table data as shown below.

Using the Azure Storage APIs, create a read-only snapshot of the Azure File Share where I am storing my Ghost data.
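In essence the function just chains two operations; the equivalent from a shell would be something like the following (the host name, credentials and share details are placeholders):

$ curl "https://blog.example.com/ghost/api/v0.1/db/backup?client_id=<CLIENT_ID>&client_secret=<CLIENT_SECRET>"

$ az storage share snapshot --name ghost-content --account-name mystorageacct --account-key <key>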

We will now be able to restore our blog in the event of any data loss. This concludes the steps which I performed to set up and run my private Ghost blog on Azure. Feel free to provide your feedback or questions via the comments section below.

Issues

Error: SQLITE_CORRUPT: database disk image is malformed

After I had my blog up and running for a few days I noticed the site became unavailable and was logging this exception to the application logs. I did some research and it appears this can happen if the DB file is not closed cleanly - sqliteexception.

I downloaded the ghost.db file from my Azure Storage account and opened it in the DB Browser for SQLite tool.

I then ran the following SQL command to verify db integrity

PRAGMA integrity_check;

The db integrity seemed to be intact, therefore I tried the next step which is typically recommended to fix this type of issue: export the entire db to SQL and re-import it into a new database - File -> Export -> Database to SQL file.

Then re-import to a new database with File -> Import -> Database from SQL file.

Replace the original db file with the newly created one.
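If you prefer the command line over the GUI, the same export/re-import can be done with the sqlite3 tool (file names are placeholders):

$ sqlite3 ghost.db ".dump" > ghost-dump.sql

$ sqlite3 ghost-new.db < ghost-dump.sql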

Performance issues when using PowerShell to download files during TFS pipeline execution

If you are using the PowerShell task together with Invoke-WebRequest or Invoke-RestMethod during your build/release on Azure DevOps or TFS you may not be getting the performance you expect.

We recently experienced such an issue where downloading larger files (~40 MB) during the execution of a build on TFS was taking much longer than expected.

To isolate the issue I set up a test build and added a single PowerShell task which executed the following inline PowerShell.
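The script was essentially just a timed download, something along these lines (the URL is a placeholder for the ~40 MB file on our local network):

Measure-Command {
    Invoke-WebRequest -Uri "http://fileserver.local/largefile.zip" -OutFile "$env:TEMP\largefile.zip"
}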

What I saw was that it was taking over 3 minutes to download this 40 MB file, which seemed completely unreasonable as this was over the local network.

I then ran the same PowerShell locally in the PowerShell Integrated Scripting Environment (ISE) and saw dramatically lower response times, ~50 seconds - but this still seemed a little high! What was odd is that I could reproduce similar response times, ~1.5 minutes, locally if I ran the script in the PowerShell console or from the command line using powershell.exe -command (also used internally by the PowerShell build task). It seems that ISE runs the progress UI updates on a background thread or something, which improves the execution performance.

So how can we improve the performance of Invoke-WebRequest and other similar cmdlets? The difference between these cmdlets and others is that they produce some progress UI during runtime, and it seems that this behavior introduces performance issues. After some bingling I found a number of reports of performance issues with these cmdlets.

Based on these findings I changed my inline PowerShell to the following.
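The only change is suppressing the progress stream before kicking off the download (same placeholder URL as before):

$progressPreference = 'SilentlyContinue'
Measure-Command {
    Invoke-WebRequest -Uri "http://fileserver.local/largefile.zip" -OutFile "$env:TEMP\largefile.zip"
}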

I re-ran my test build and immediately saw a huge improvement! It took just under 2 seconds to download exactly the same file. This improvement was also seen across all the scenarios described above :)

Therefore if you are utilizing these cmdlets heavily during your build or release pipelines, or in fact for any non-interactive execution of PowerShell scripts, I would recommend setting the following option during the execution of your PowerShell scripts.

$progressPreference = 'silentlyContinue'

Hope this helps!

Migrating to Ghost

I am currently in the process of migrating to Ghost from Wordpress. I successfully deployed Ghost as a container on top of Azure Web App for Containers but still need to import my old posts. Expect more to come in the future.

Out-of-process hosting of ASP .NET Core with IIS - do AppPool resource limits apply?

In IIS we can impose resource limits on the Application Pool hosting our Web Applications. In this scenario we are typically hosting in-process, i.e. ASP .NET, but do these limits still apply when we host an ASP .NET Core 2.0 application?

CPU Limit

Memory Limit

Currently we host our ASP .NET Core applications out-of-process using a reverse proxy and Kestrel, therefore it wasn't clear to me whether these limits would be enforced. So I wanted to take a closer look...

IIS leverages Job Objects, an OS feature which allows us to manage multiple processes as a single unit, as well as to impose limits on resource usage for these processes. In Windows 7, Windows Server 2008 R2 and below, Job Objects could not be nested, and we ran into issues with other out-of-process hosting solutions like the FastCGI module, which also leveraged Job Objects internally to ensure that FastCGI processes are terminated when the worker process terminates. If we look at the implementation of the ASP .NET Core Module which we use for hosting ASP .NET Core apps in IIS, we can see what looks like a very similar usage of Job Objects.

It turns out that Microsoft added support for nested Job Objects in Windows 8 & Windows Server 2012 and above, therefore we don't run into the same issues which we experienced with Windows 7 and Server 2008 R2 and below. It's simple enough for us to test: I set up a simple ASP .NET Core application and deployed it to a Windows Server 2016 machine on Azure. Before applying any limits we can take a look at the w3wp child process tree on the server with Process Explorer (if no worker process is currently running, issue at least one HTTP request against your web application). We can see that there are 2 child processes under our w3wp worker process, and we also see that there are Job limits applied to the dotnet.exe process which is hosting our .NET Core application - in this case it should be killed if the parent Job is closed.

To see if CPU limits are applied let's set a limit of 10% on the app pool: open IIS Manager->Application Pools->App Pool->Advanced Settings on the server. Leave the rest as default, as we only want to inspect the Job limits for the process using Process Explorer.

After making the changes we can inspect the w3wp child process tree again in Process Explorer. What we can see is that the w3wp process now has a Job tab which lists the Job limits for its nested Jobs; these limits are applied to this Job as well as all nested Jobs.

Azure Container Instances rock!

I was recently updating an old website, and I decided to move the old ASP .NET Web Forms code base over to the latest and greatest ASP .NET Core 2.0 and Razor Pages. It wasn't a large code base but it did contain some Flash and Silverlight, which given declining client support I decided to replace with equivalent HTML5 and JavaScript. The experience was so enjoyable I decided to blog about it!

Migrate Code base

My first step was to get a local copy of the source code for the site; the code was not even in source control... naughty naughty. Luckily it was not a pre-compiled web application and the VB .NET... yes, you read it correctly, VB .NET :) source code was still available. As there was not a lot of server-side code involved I decided to port it over to C#; .NET Core supports VB .NET but I'm not sure about support for the language in ASP .NET Core - see github issue #2738 for some interesting history. Anyway, I see more and more C# samples out there and it makes my life easier to have everything in one language. I wanted to be able to deploy this web application to any Azure PaaS which supported containers, or even some other cloud, so I decided to build a Docker image - this was literally 1 checkbox checked during File->New Project in Visual Studio 2017. You will also find a Dockerfile in the root of your ASP.NET Core project; it defines a multi-stage docker build, and we will use this file later when we build our solution in VSTS/TFS. The final image generated by the build command will only contain the release binaries & output for your web application.
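The generated file looks roughly like this (a trimmed sketch of the VS2017-era template for ASP .NET Core 2.0; the project name is a placeholder):

FROM microsoft/aspnetcore:2.0 AS base
WORKDIR /app
EXPOSE 80

# The build stage uses the SDK image, so the Build Agent doesn't need .NET Core installed
FROM microsoft/aspnetcore-build:2.0 AS build
WORKDIR /src
COPY MyWebApp.csproj ./
RUN dotnet restore
COPY . .
RUN dotnet publish -c Release -o /app

# The final image contains only the published output on top of the runtime image
FROM base AS final
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "MyWebApp.dll"]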

To ensure I didn't disrupt future traffic I leveraged the ASP .NET Core URL Rewriting Middleware to redirect all traffic destined for the old .aspx pages to the new shiny Razor Pages. Ok, we are now ready to move on to the next step and build our solution!

Continuous Integration with VSTS

I wanted to simplify making any future changes to my solution, therefore I chose to push my code up to Visual Studio Team Services; I will also use the build automation functionality provided by VSTS to set up a CI build. With the multi-stage Dockerfile our lives are made a lot easier, as we bootstrap the build on the Build Agent using a docker image which contains the .NET Core SDK. This running "build" container is then used to build our solution and finally to produce the final Docker image - we then push our Docker image to a private Docker registry on Azure Container Registry. In addition to the build task set up above we also set our trigger for this Build definition to enable CI as seen below. For more details on how you can build container images with VSTS/TFS check out the following resource - https://docs.microsoft.com/en-us/vsts/build-release/apps/containers/build?tabs=web
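Deploying to Azure Container Instances

With the image in ACR we can deploy it to Azure Container Instances from the Azure CLI; the first step is signing in:

$ az login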

The login command will output further steps to the console which must be performed in your browser; once authenticated we can move on to deploying your container. Create a resource group which will contain your Container Group:

$ az group create --name MyWebApplicationRG --location westeurope

Finally we can deploy our container instance, specifying the docker image to deploy as well as the required resources for the instance.
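Mine looked something like the following (image name, registry credentials and sizes are placeholders; --dns-name-label produces the FQDN mentioned below):

$ az container create --resource-group MyWebApplicationRG --name mywebapp \
    --image myregistry.azurecr.io/mywebapp:latest --cpu 1 --memory 1.5 \
    --registry-username <acr-user> --registry-password <acr-password> \
    --ports 80 --ip-address Public --dns-name-label mywebapp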

This command will return a JSON result containing interesting information about your Container Group; you will see things like the public IP address & FQDN which point to your running container, and the container will be listening for traffic on TCP port 80. All I had to do was point my DNS to this new IP and I was done. For more information on deploying a simple Azure Container Instance check out the following resource - https://docs.microsoft.com/en-us/azure/container-instances/container-instances-tutorial-deploy-app

Conclusion

At this point we still have a manual deployment step, but it's fairly straightforward to automate it using Release Management. As you can see, we went from a legacy app to a modern one in literally 60 minutes. For sure this was a simple scenario, but there are a lot of single-function enterprise apps out there which could benefit from a bit of modernization!

Tip

Check out the Virtual Kubelet project - this is a cool piece of code contributed to the community by Microsoft which allows us to add capacity to a Kubernetes cluster without adding additional nodes to the cluster; this could be a stop gap while you are provisioning new resources. The really cool thing is that you can extend your cluster with Azure Container Instances!

Deploying Kubernetes with Azure

I’m loving everything related to containers at the moment. In Azure we have a number of ways that we can deploy our container workloads:

Container Instances

App Services

AKS

ACS

Service Fabric

Each brings with it its own benefits and challenges, some of which are listed below:

Container Instances & App Services are great for low complexity single container deployments, using these services developers need not be concerned about creating & managing a cluster.

AKS (PREVIEW) is a Kubernetes-specific solution which is great for building highly available complex container deployments; AKS clusters require minimal maintenance due to built-in support for auto-scaling, auto-patching, auto-updates, etc.

ACS supports deploying multiple orchestrators and is also great for complex container workloads, but the burden of maintaining your cluster falls mainly on you.

Service Fabric is a battle-tested distributed system which now also has the ability to orchestrate container workloads alongside its native programming models, i.e. Reliable Services and Reliable Actors.

In this post I will demonstrate the various ways of deploying a Kubernetes cluster on Azure. Kubernetes is a popular container orchestration solution; a container orchestrator allows us to, amongst other things, automate deployments, scale our workloads and monitor our deployments. There are multiple ways we can deploy Kubernetes on Azure - ACS, AKS and ACS-Engine - each of which is covered below.
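Whichever route you take, the first step is to sign in with the Azure CLI:

$ az login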

The login command will output further steps to the console which must be performed in your browser; once authenticated we can move on to creating our cluster.

ACS

The first step is to create a new Azure Resource Group which will contain all the resources which will be provisioned for our cluster; we need to provide a name and the region where we would like to create it.

$ az group create --name geacsclusterrg --location westeurope

In the output from the command we should see "ProvisioningState Succeeded". We can now provision the cluster - to do this we need to provide a number of arguments including the Resource Group name created in the previous step, the cluster name, the SSH key which we will use to authenticate with the head node, and admin credentials for each node in the cluster.
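My command looked something like the following (a sketch - az acs has since been deprecated, and the password is a placeholder):

$ az acs create --orchestrator-type kubernetes --resource-group geacsclusterrg \
    --name geacscluster --agent-count 2 --windows \
    --admin-username azureuser --admin-password <password> --generate-ssh-keys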

The command above will create a 3 node cluster: 1 head node (Linux) and 2 agents (Windows). During the course of the deployment it will also generate an SSH KeyPair (this will be located in [User Profile]\.ssh\); these keys will be used to manage our cluster. Once the command completes it will output a JSON result, and again we should see "provisioningState Succeeded" if the deployment was successful. If you log in to the Azure Portal and open the Resource Group which you created you will see all the platform-specific resources which were deployed to create your ACS cluster.

Connecting to our Cluster

To manage our cluster we need the kubectl tool; we can download the binary manually using the instructions detailed in the kubernetes docs, or we can use the Azure CLI with the following command.

$ az acs kubernetes install-cli

On Linux this command will place the kubectl binary somewhere on your PATH, but on Windows you may want to use the switch --install-location "%ProgramFiles(x86)%\kubernetes\kubectl.exe" to specify a download location. You would then need to add the "%ProgramFiles(x86)%\kubernetes" folder to the PATH. To connect to our cluster we need to download our kubeconfig & test the connection by getting a list of nodes in the cluster; we can do this using the following commands.
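For ACS we can pull the cluster credentials down and list the nodes like so (names match the cluster created above):

$ az acs kubernetes get-credentials --resource-group=geacsclusterrg --name=geacscluster

$ kubectl get nodes

AKS

Provisioning an AKS cluster is similar, but the control plane is managed for us; a minimal create command looks something like this (node count is illustrative, and the cluster name matches the one used later with az aks browse):

$ az aks create --resource-group geacsclusterrg --name myclustername --node-count 2 --generate-ssh-keys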

The command above will create a 2 node cluster (Linux) with a managed head node. During the course of the deployment it will also generate an SSH KeyPair (this will be located in [User Profile]\.ssh\); these keys will be used to manage our cluster. Once the command completes it will output a JSON result and we should see "provisioningState Succeeded". If you log in to the Azure Portal and open the Resource Group which you created you will see all the platform-specific resources which were deployed to create your AKS cluster. What is interesting is that in actual fact we see two new Resource Groups in our Azure subscription, the first being the one we created, which contains a "Container Service" resource. In addition to the Resource Group above we get another which was created during the deployment and contains all the resources (Compute, Storage, Network, etc.) for our worker nodes.

Connecting to our Cluster

The steps to connect to our cluster are very similar to ACS; first we need the kubectl tool. We can download the binary manually using the instructions detailed in the kubernetes docs, or we can use the Azure CLI with the following command.

$ az aks install-cli

On Linux this command will place the kubectl binary somewhere on your PATH, but on Windows you may want to use the switch --install-location "%ProgramFiles(x86)%\kubernetes\kubectl.exe" to specify a download location. You would then need to add the "%ProgramFiles(x86)%\kubernetes" folder to the PATH. To connect to our cluster we need to download our kubeconfig & test the connection by getting a list of nodes in the cluster; we can do this using the following commands.
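Again the names here match the AKS cluster created above:

$ az aks get-credentials --resource-group geacsclusterrg --name myclustername

$ kubectl get nodes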

Note that in the case of AKS we only deploy the agents; the head nodes are managed for us by the platform. Kubernetes also comes with a great web dashboard - when deploying on AKS the dashboard is included automatically, and we can access the dashboard over an SSH tunnel.

$ az aks browse --resource-group=geacsclusterrg --name=myclustername

We can now open a browser and navigate to http://127.0.0.1:8001/ui/ (if the browser window is blank make sure to include a trailing forward slash at the end of the url).

ACS-Engine

ACS-Engine allows us to define complex container deployments for Azure: we describe them as JSON and the tool then converts this JSON to a set of ARM Templates which can be deployed to Azure. I chose to build acs-engine from source, and in my case I used the Windows Subsystem for Linux & Ubuntu, but you can also download pre-compiled binaries - for more details see - https://github.com/Azure/acs-engine/blob/master/docs/acsengine.md Before we continue we need to generate an SSH keypair and create an Azure Service Principal for the cluster to use.

Next we can create a cluster definition - the JSON below will create a hybrid Windows/Linux ACS cluster using Kubernetes; replace the values for keyData and ServicePrincipal with the values you created above.
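A trimmed sketch of such a cluster definition (not the full apimodel; VM sizes and counts are illustrative, and the dnsPrefix matches the MASTERFQDN mentioned below):

{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": { "orchestratorType": "Kubernetes" },
    "masterProfile": { "count": 1, "dnsPrefix": "gemycluster", "vmSize": "Standard_D2_v2" },
    "agentPoolProfiles": [
      { "name": "linuxpool", "count": 2, "vmSize": "Standard_D2_v2" },
      { "name": "windowspool", "count": 2, "vmSize": "Standard_D2_v2", "osType": "Windows" }
    ],
    "linuxProfile": {
      "adminUsername": "azureuser",
      "ssh": { "publicKeys": [ { "keyData": "<your SSH public key>" } ] }
    },
    "windowsProfile": { "adminUsername": "azureuser", "adminPassword": "<password>" },
    "servicePrincipalProfile": { "clientId": "<appId>", "secret": "<password>" }
  }
}

The definition is then turned into ARM templates and deployed, along these lines (the output folder is named after the dnsPrefix; the resource group is a placeholder):

$ acs-engine generate kubernetes.json

$ az group deployment create --resource-group gemyclusterrg \
    --template-file _output/gemycluster/azuredeploy.json \
    --parameters @_output/gemycluster/azuredeploy.parameters.json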

In the JSON output look for the property MASTERFQDN - for example, in this case mine would be gemycluster.westeurope.cloudapp.azure.com; we will use this in the next section. If you log in to the Azure Portal and open the Resource Group which you created you will see all the platform-specific resources which were deployed to create your ACS cluster.

Connecting to our Cluster

If you haven't already downloaded the kubectl binary you can do so manually using the instructions detailed in the kubernetes docs, making sure to match the version used for the cluster (this can also be achieved through the Azure CLI and the --client-version switch). It's best to place the kubectl binary somewhere on your PATH; on Windows you may want to place it in a location like "%ProgramFiles(x86)%\kubernetes\kubectl.exe" and then add the "%ProgramFiles(x86)%\kubernetes" folder to your PATH environment variable. To connect to our cluster we also need to download our kubeconfig & test the connection by getting a list of nodes in the cluster. First we need to download the kubeconfig from the newly created master node; on Linux you can execute the following commands.
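Something like the following, using the MASTERFQDN noted earlier (the admin username matches the one from the cluster definition):

$ scp azureuser@gemycluster.westeurope.cloudapp.azure.com:.kube/config ~/.kube/config

$ kubectl get nodes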

ACS-Engine's deployment of Kubernetes also includes the great web dashboard; we can access the dashboard over an SSH tunnel.

$ kubectl proxy

We can now open a browser and navigate to http://127.0.0.1:8001/ui/ (if the browser window is blank make sure to include a trailing forward slash at the end of the url).

Summary

There you go, we have successfully deployed our first Kubernetes cluster on Azure! As we saw, there are multiple strategies for deploying our container workloads to Azure; it's up to you which you choose. Once we have successfully completed the steps above we should be looking at the Kubernetes dashboard displayed below. In the next post we will look at deploying an application to our cluster.

Windows Embedded Compact 7 development on Windows 10

Although the following scenario works, keep in mind that it is not necessarily supported by Microsoft; use the information at your own risk. It's important to note that official support for Visual Studio 2008 ends April 10th 2018.

Recently I spent time looking into this scenario; a colleague pointed me to a post on the MSDN forums where users had got a WEC 7 image to boot with Hyper-V on Windows 10.

So I thought I would try it, and after finally getting it to work I decided to document it so I can reference this article in the future. My goal was to run my WEC 7 images in the latest and greatest Windows hypervisor, i.e. Hyper-V. Keep in mind Virtual PC works fine in a Windows 7 Hyper-V guest VM, and up until now this is how I have done my development & testing.

The high level steps I performed were as follows:

Setup Hyper-V Networking.

Create NAT Network & Switch.

Install & Configure DHCP server.

Setup the Developer Tools.

Install Visual Studio 2008 with SP 1

Install Windows Embedded Compact 7

Create Simple OS Design.

Create BSP clone based on Virtual PC BSP

Tweak for use with Hyper-V

Configure Network

Create OS Design project

Add Hyper-v Guest for WEC 7

Create Virtual Machine

Run VM & Download OS Image

So here it goes these are the detailed steps I performed to get my Embedded Compact Developer environment migrated to Windows 10

Requirements

Windows 10 Creators Update –1703

Visual Studio 2008 Pro SP1

Windows Embedded Compact 7 with Jan 2016 Update

DHCP for Windows

1. Setup Hyper-V Networking

Network Address Translation allows us to define a private IP address range for our VMs, whereby traffic will be re-routed through our host machine's network interface. This, together with a DHCP server, allows us to allocate individual private IPs to our VMs and also allows them access to the public internet. NAT is not strictly necessary here, but this is my preferred way of configuring networking for my Virtual Machines.

Initial NAT network support was added in the Anniversary Update release of Windows 10, although there were some restrictions on its usage - see the WinNAT capabilities and limitations blog post. In the Creators Update release the Windows team vastly improved NAT support by, amongst other things, adding support for multiple NAT networks.

1.1 Create NAT Network & Add Switch

From the Windows Start Menu, search for PowerShell, right click and select “Run as Administrator”.

Once the PowerShell console opens run the following commands.

1. Create a new NAT network with the name “VMNAT” and a large private IP address range 10.10.0.0/17

2. Create a new Internal Virtual Switch

3. Remove the default IP address configuration for the new Switch

4. Get the interface index for the Network Adaptor assigned to the new Switch

5. Create a new IP Address within the NAT network range created above
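Putting the five steps together, the commands look like this (the switch name matches the “vEthernet (Virtual Machines NAT)” adapter referenced later, and the 10.10.0.1 gateway address is an assumption within the NAT range):

# 1. Create the NAT network
New-NetNat -Name "VMNAT" -InternalIPInterfaceAddressPrefix "10.10.0.0/17"

# 2. Create a new internal virtual switch
New-VMSwitch -Name "Virtual Machines NAT" -SwitchType Internal

# 3. Remove the default IP address configuration for the new switch
Remove-NetIPAddress -InterfaceAlias "vEthernet (Virtual Machines NAT)" -Confirm:$false

# 4. Get the interface index for the network adapter assigned to the new switch
$ifIndex = (Get-NetAdapter "vEthernet (Virtual Machines NAT)").ifIndex

# 5. Assign an IP address within the NAT network range created above
New-NetIPAddress -IPAddress 10.10.0.1 -PrefixLength 17 -InterfaceIndex $ifIndex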

We are now ready to proceed to adding a local DHCP server; we could also skip the “Add Local DHCP” step and assign static IPs to each VM without configuring a DHCP server.

1.2 Add Local DHCP

For DHCP I am using the simple tool DHCP Server for Windows; I wanted a simple and lightweight way of installing & configuring a local DHCP server, and this tool meets those needs.

1. Download and extract the latest release of DHCP for Windows, I extract it somewhere central like the Program Files folder.

2. Right click and run the “dhcpwiz.exe” tool as Administrator to start the setup wizard, click next.

3. The available network cards will be displayed, select the “vEthernet (Virtual Machines NAT)” network which we created earlier & click next.

6. Click “Advanced” - add your favorite public DNS servers (in this case I am using Google’s), and set your default gateway to your NIC IP.

7. Click “Write INI file” - this will persist the settings you have configured through the wizard - then click next.

8. There are two ways to run the DHCP server; I prefer to install it as a Windows Service. Click Service –> Install and then Service –> Start. If you have your firewall enabled, click Configure under the Firewall Exceptions section.

2. Setup the Developer Tools.

In this guide I install the tooling on the same machine, but because I like to keep only the latest version of Visual Studio installed on my machine, my preference is to install the developer tooling in a guest VM (could be a Windows 7 or Windows 10 guest).

To install Windows Embedded Compact 7 you must meet the requirements listed in the Requirements section above.

2.1 Install Visual Studio 2008

Based on the requirements we need to install Visual Studio 2008 with SP1, and it needs to be the Professional or higher SKU.

1. Run the Visual Studio 2008 installer, click Install Visual Studio.

2. For simplicity I have selected the Default installation option

3. Wait for installation to complete & click next. Note: you will be notified about various compatibility issues by the installer.

2.2 Install Windows Embedded Compact 7

4. To reduce the installation footprint I selected custom install and clicked next.

5. Select the following components for installation & click next:

Platform Builder

Silverlight Tools

EN Documentation

Shared Source

x86 Architecture

6. Leave the default (do not create an offline layout) and click next.

7. Accept the EULA and click next.

8. Wait for installation to complete & click next.

9. Click finish to close the installer.

10. This scenario has been tested with the January 2016 Update; at a minimum we should be using this version of the product - see Microsoft downloads to download the update. Once downloaded, install the update.

3. Create OS Design

Running a WEC 7 image in Hyper-V requires specific changes to be made to the Virtual PC BSP; rather than make these changes on the built-in BSP, I prefer to clone it and customize the cloned BSP - my goal here is to create a minimum viable OS Design to run in Hyper-V. For detailed steps for creating an OS Design see the following blog post series over at Embedded101.com.

4.2 Run VM & Download OS Image

We are now ready to run our virtual machines and download our newly compiled OS Image.

1. Start with Visual Studio 2008 SP1 and the OS design which we built in the previous section open.

2. Right click the VM in Hyper-V Manager and click Connect; with the VM console open click Start. On boot our VM will start to send out BOOTME messages.

3. In Visual Studio, make sure we have the (auto) Ether device selected in the toolbar & click Attach Device; the following window will be shown. It may take a moment for the device to be detected; select the device from the list and click apply.

4. The image will now be downloaded to the device.

5. Progress will be shown in VS2008 & in the VM Console Window.

6. Once the download completes WEC will boot.

There you go - now you can enjoy Windows 10 and still develop for Embedded Compact 7. Of course your mileage may vary - there are no guarantees regarding this scenario, and although it may work now there is no way to know if it will continue to work on future releases of Hyper-V/Windows.

Let me know your thoughts & issues.

Troubleshooting Tips

tftp timeout

Problem:

I am using the stsadm property "peoplepicker-searchadcustomfilter" to set a custom search filter, but the people picker returns no results, even though I am sure that the user exists. I am using a custom Active Directory attribute in my filter - is there anything special I have to do to get it to work?

Solution:

Yes. When the people picker performs a search it queries the Domain Controller using an LDAP Global Catalog search request. This means that when you create new custom attributes you need to ensure that they are included in the Global Catalog; you can do this by checking the (2) "Replicate to the Global Catalog" option in the attribute properties. If this option is not checked you will not be able to reference this attribute in your People Picker filter. Another tip which will speed up your queries is to make sure that you check the (1) "Index this attribute" option.
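For reference, the property itself is set per Web Application with stsadm; a filter on a custom attribute would look something like this (the attribute name, value and URL are placeholders):

stsadm -o setproperty -pn peoplepicker-searchadcustomfilter -pv "(myCustomAttribute=SomeValue)" -url http://mywebapp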

Remember that the custom filter will apply to PeoplePicker for the entire Web Application.