
We’re excited to introduce you to the Microsoft Modern Keyboard with Fingerprint ID, a premium quality keyboard that brings the convenience and security of Windows Hello fingerprint sign-in to any PC running Windows 10, and the Microsoft Modern Mouse, a sleek simple complement to the keyboard.

Windows Hello* helps people move to a password alternative: a fast, convenient and exceptionally secure way to unlock your Windows devices. Studies show more than 80 percent of people use the same password across multiple websites while managing around 20-30 different accounts. We want to make sure that everyone running Windows 10 can experience the beautiful relief that comes from letting go of your written Pa55w0Rd5! So we worked to deliver a predictable, intent-driven and simple solution for quickly and securely logging into a PC, or authenticating an action.

With the new Microsoft Modern Keyboard with Fingerprint ID, you can use your finger to sign into your Windows devices, and compatible apps**, with Windows Hello in less than 2 seconds – that’s 3 times faster*** than a password that you have to remember and type in.

Design and keyboard specs

On top of the convenience of effortless login, the Microsoft Modern Keyboard with Fingerprint ID gives you the flexibility of working wirelessly or through direct wired connection, as it features both a Bluetooth 4.0 and hardwire connection.

The keyboard’s sleek, low-profile design and crafted aluminum top case bring an element of sophistication to any well-thought-out desk space. With 2mm key travel, the typing experience has been carefully crafted to maximize accuracy and typing efficiency. Unlike other solutions in the market today, the new Windows Hello enterprise-grade secure fingerprint reader has been subtly and beautifully designed as a regular key.

Sold separately, you can also complete your desktop solution with the sleek, simple Microsoft Modern Mouse. As a visual complement to the keyboard, this mouse lets you set up your workspace to be both beautiful and practical.

The raised arc of the Microsoft Modern Mouse supports your palm in a natural resting position, and reduces the tension on your wrist, to keep your hand relaxed as you point and click. The metal scroll wheel feels solid under your finger, and you don’t have to worry about wires thanks to Bluetooth wireless technology**** – plus it has up to 12 months of battery life.

*Windows Hello requires specialized hardware, including a fingerprint reader, illuminated IR sensor or other biometric sensors, and capable devices.
**Limited to compatible apps.
***Based on average time comparison between typing a password versus detecting a face or fingerprint to authentication success.
****Device must support Bluetooth 4.0 or higher. Many Apple and Windows 7 computers use older versions.
*****In the U.S.; warranty terms vary by market.

Today, we are happy to announce a new preview release of ChakraCore, based on Node.js 8, available for you to try on Windows, macOS, and Linux.

We started our Node-ChakraCore journey with a focus on extending the reach of Node.js to a new platform, Windows 10 IoT Core. From the beginning, it’s been clear that in addition to growing the reach of the Node.js ecosystem, there’s a need to address real problems facing developers and the Node.js ecosystem through innovation, openness and community collaboration.

As we continue our journey to bring fresh new ideas and enable the community to imagine new scenarios, we want to take a moment to reflect on some key milestones we’ve achieved in the last year.

Full cross-platform support

While ChakraCore was born on Windows, we’ve always aspired to make it cross-platform. At NodeSummit 2016, we announced experimental support for the Node-ChakraCore interpreter and runtime on Linux and macOS.

In the year since that announcement, we’ve brought support for full JIT compilation and concurrent and partial GC on x64 to both macOS and Ubuntu Linux 14.04 and higher. This has been a massive undertaking that brings Node-ChakraCore features to parity across all major desktop operating systems. We are actively working on cross-platform internationalization to complete this support.

Support for Node.js API (N-API)

This year, our team was part of the community effort to design and develop the next-generation Node.js API (N-API) in Node.js 8, which is fully supported in ChakraCore. N-API is a stable Node API layer for native modules that provides ABI compatibility guarantees across different Node versions and flavors. This allows N-API-enabled native modules to just work across different versions and flavors of Node.js, without recompilation.

According to some estimates, 30% of the module ecosystem is impacted every time there is a new Node.js release, due to the lack of ABI stability. This causes friction in Node.js upgrades in production deployments and adds cost for native module maintainers, who must maintain several supported versions of their modules.

Node.js on iOS

We are always delighted to see the community build and extend Node-ChakraCore in novel and interesting ways. Janea Systems recently announced their experimental port of Node.js to run on iOS, powered by Node-ChakraCore. This takes Node.js to iOS for the first time, expanding the reach of the Node.js ecosystem to an entirely new category of devices.

Node.js on iOS would not be possible without Node-ChakraCore. Because of the JITing restrictions on iOS, stock Node.js cannot run. However, Node-ChakraCore can be built to use the interpreter only, with the JIT completely turned off.

This is particularly useful for scenarios like offline-first mobile apps designed with the expectation of unreliable connectivity or limited bandwidth. These apps primarily rely on a local cache on the device, and use store-and-forward techniques to opportunistically use data connectivity when available. These kinds of apps are common in scenarios like large factory floors, remote oil rigs, disaster zones, and more.

Time-Travel Debugging

This year also brought the debut of Time-Travel Debugging with Node-ChakraCore on all supported platforms, as originally demoed using VS Code at NodeSummit 2016. This innovation directly targets the biggest pain point developers have with Node.js – debugging! Since its introduction, Time-Travel Debugging has improved in stability and functionality, and with this release it is also available with Node-ChakraCore on Linux and macOS.

We recently started measuring module compatibility using CITGM modules, and have improved compatibility with a wide variety of modules. Popular Node modules like node-sass, express and body-parser are considering using Node-ChakraCore in their CI systems to ensure ongoing compatibility. Node-ChakraCore has also improved ACMEAir performance on Linux by 15% in the last 2 months, and we’ve identified areas for further improvement in the near future.

With our initial priority of full cross-platform support behind us, we are moving our focus to new priorities, including performance and module compatibility. These are our primary focus for the immediate future, and we look forward to sharing progress with the community as it happens!

Get involved

As with any open source project, community participation is key to the health of Node-ChakraCore. We could not have come this far in our journey without the reviews and guidance of everyone who is active on our GitHub repo and in the broader Node community. We are humbled by your enthusiasm and wish to thank you for everything you do. We will be counting on your continued support as we make progress in our journey together.

For those who are looking to get involved outside of directly contributing code, there are several other ways to advance the Node-ChakraCore project. If you are a …

Node.js module maintainer – Try testing your module with Node-ChakraCore. Use these instructions to add Node-ChakraCore in your own CI to ensure ongoing compatibility. If you run into issues, please let us know at our repo or our gitter channel.

Native module maintainer – Consider porting your module to N-API. This will help insulate your module from breakage due to new Node releases and will also work with Node-ChakraCore.

As always, we are eager to hear your feedback, so please keep it coming. Find us on Twitter @ChakraCore, on our gitter channel, or open an issue on our GitHub repo to start a conversation.

Beginning in the Windows 10 Fall Creators Update, we intend to disable VBScript execution in IE 11 for websites in the Internet Zone and the Restricted Sites Zone by default, to provide a more secure experience. This change was initially announced in a blog post in April. The new default behavior can be previewed beginning with today’s Windows Insider Preview release, build 16237.

For customers on previous versions of Windows, we intend to include this change in future cumulative security updates for Internet Explorer 11. The settings to enable, disable, or prompt for VBScript execution in Internet Explorer 11 will remain configurable per site security zone, via Registry or via Group Policy, on released versions of Windows. We will continue to post updates here in advance of these changes to default settings for VBScript execution in Internet Explorer 11.

To provide feedback on this change, or to report any issues resulting from this change in Windows Insider Preview, you can use the Feedback Hub app on any Windows 10 device. Your feedback goes directly to our engineers to help make Windows even better.

Today we are excited to be releasing a new build from our Development Branch! Windows 10 Insider Preview Build 16170 for PC has been released to Windows Insiders in the Fast ring. As we mentioned earlier this week, you won’t see many big noticeable changes or new features in new builds just yet. That’s because right now, we’re focused on making some refinements to OneCore and doing some code refactoring and other engineering work that is necessary to make sure OneCore is optimally structured for teams to start checking in code. This also means more bugs and other issues that could be slightly more painful to live with – so check your Windows Insider Program settings!

Windows Insider Program for Business is here!

We have one other exciting announcement about a program we co-created with our IT Professional Windows Insiders.

Back in mid-February at Microsoft Ignite in Australia, Bill Karagounis showcased our commitment to an important segment of the Windows Insider program – IT Professionals. As Bill stated, we’re incredibly honored to have IT Pros participating in the Windows Insider Program and to be evaluating Windows 10 and its features as part of their deployment process.

Since his announcement, we’ve continued to receive an overwhelming response from IT Professionals interested in helping us shape the future of the program with features specifically for business. One of the most frequent requests we received from Insiders was for the option to join the Windows Insider Program using corporate credentials (instead of the existing registration process which requires a personal Microsoft Account):

“I’m currently in the Windows Insider Program and would love to be able to test more business-oriented features internally. It would also be great to be able to recruit a few users to run Insider Builds, as well, using the corporate credentials. If there were mechanisms in place for me to see those users’ feedback and issues, that would be great, as well.” – Current Windows Insider at US-based Company

“I want more users in key areas to be able to test/evaluate/learn/feedback. Microsoft accounts are not allowed. We are using SCCM current release and want to establish steps before ‘release ready’ and ‘business ready’.” – Current Windows Insider at UK-based Company

“Due to the rapid release of Windows we need a different channel to where IT Pros can provide feedback to the Dev teams.” – Current Windows Insider at an Australian-based Company

Based on feedback like this, we’re excited to announce today that Insiders can now register for Windows 10 Insider Preview Builds on their PC using their corporate credentials in Azure Active Directory.

Using corporate credentials will enable you to increase the visibility of your organization’s feedback – especially on features that support productivity and business needs. You’ll also be able to better advocate for the needs of your organization, and have real-time dialogue with Microsoft on features critical to specific business needs. This dialogue, in turn, helps us identify trends in issues organizations are facing when deploying Windows 10 and deliver solutions to you more quickly.
We’ll be rolling out even more tools aimed at better supporting IT Professionals and business users in our Insider community. Stay tuned!

How to access the Windows Insider Program for Business features

Simply visit the Windows Insider Program site and click on the “For Business” tab. To access the new features, you must register using your corporate account in Azure Active Directory (AAD). This account is the same account that you use for Office 365 and other Microsoft services.

Once you’ve registered using your corporate credentials, you’ll find a set of resources that will help you get started with the Windows Insider Program for Business in your organization.

Go to Settings > Update & security > Windows Insider Program. (Make sure that you have administrator rights on your machine and that it has the latest Windows updates.)

Click Get Started, enter your corporate credentials that you used to register, then follow the on-screen directions.

Windows Insider for Business participants partner with the Windows Development Team to discover and create features, infuse innovation, and plan for what’s around the bend. We’ve architected some great features together, received amazing feedback, and we’re not done!

In addition, the Windows Insider Program connects you to a global community of IT Pros in our new Microsoft Tech Community and helps provide you with the information and experience you need to grow not only your skills but your career as well. You’ll be hearing a LOT more from us in the coming months.

Keep the feedback coming!

Other changes, improvements, and fixes for PC

We fixed the issue causing your PC to fail to install new builds on reboot with the error 8024a112.

We have updated the share icon in File Explorer (in the Share tab) to match our new share iconography.

We fixed an issue where Cortana Reminders was displayed as a possible share target when Cortana wasn’t enabled.

We fixed an issue where Miracast sessions would disconnect a minute or so after the Connect UI was closed if the connection was a first time pairing.

We fixed a high-DPI issue when “System (Enhanced)” scaling is enabled, so that certain applications that use hardware-accelerated graphics content now display correctly.

Turning the night light schedule off in Settings now turns night light off immediately.

Known issues for PC

Narrator will not work on this build. If you require Narrator to work, you should move to the Slow ring until we get this bug fixed.

Some Insiders have reported seeing this error “Some updates were cancelled. We’ll keep trying in case new updates become available” in Windows Update. See this forum post for more details.

Some apps and games may crash due to a misconfiguration of advertising ID that happened in a prior build. Specifically, this issue affects new user accounts that were created on Build 15031. The misconfiguration can continue to persist after upgrading to later builds. The ACL on the registry key incorrectly denies access to the user, and you can delete the following registry key to get out of this state: HKCU\Software\Microsoft\Windows\CurrentVersion\AdvertisingInfo.

There is a bug where if you need to restart your PC due to a pending update like with the latest Surface firmware updates, the restart reminder dialog doesn’t pop up. You should check Settings > Update & security > Windows Update to see if a restart is required.

Certain hardware configurations may cause the broadcast live review window in the Game bar to flash green while you are broadcasting. This does not affect the quality of your broadcast and is only visible to the broadcaster. Make sure you have the latest graphics drivers.

Double-clicking on the Windows Defender icon in the notification area does not open Windows Defender. Right-clicking on the icon and choosing Open will open Windows Defender.

Surface 3 devices fail to update to new builds if an SD memory card is inserted. The updated drivers for the Surface 3 that fix this issue have not yet been published to Windows Update.

Pressing F12 to open the Developer Tools in Microsoft Edge while the F12 window is open and focused may not return focus to the tab the tools were opened against, and vice versa.

The Action Center may get into a state where dismissing one notification unexpectedly dismisses multiple. If this happens, please try rebooting your device.

It’s that time again! We’re getting ready to start releasing new builds from our Development Branch. And just like before after the release of a new Windows 10 update, you won’t see many big noticeable changes or new features in new builds just yet. That’s because right now, we’re focused on making some refinements to OneCore and doing some code refactoring and other engineering work that is necessary to make sure OneCore is optimally structured for teams to start checking in code. Now comes our standard warning that these new builds from our Development Branch may include more bugs and other issues that could be slightly more painful for some people to live with. So, if this makes you uncomfortable, you can change your ring by going to Settings > Update & security > Windows Insider Program and moving to the Slow or Release Preview rings for more stable builds.

Additionally, if you are a Windows Insider who wants to stay on the Windows 10 Creators Update – you will need to go to Settings > Update & security > Windows Insider Program and press the “Stop Insider Preview builds” button.

A menu will pop up and you will need to choose “Keep giving me builds until the next Windows release”. This will keep you on the Windows 10 Creators Update.

Today, we’re thrilled to announce a partnership with BrowserStack, a leader in mobile and web testing, to provide remote virtual testing on Microsoft Edge for free. Until now, developers who need to test against a specific version of Microsoft Edge have been limited to local virtual machines, or PCs with Windows 10 installed. However, there are many developers who don’t have easy access to Microsoft Edge for testing purposes.

BrowserStack Live Testing can run Microsoft Edge inside your browser on macOS, Windows, or Linux.

Today, we are excited to partner with BrowserStack, which provides the industry’s fastest testing on physical devices and browsers, so that you can focus on delivering customers the best version of your product or website. BrowserStack is trusted by developers at over 36,000 companies, including Microsoft, to help make the testing process faster and more accessible. Under this new partnership, developers will be able to sign into BrowserStack and test Microsoft Edge using their Live and Automate services for free.

Live testing provides a remote, cloud-based instance of Microsoft Edge streamed over the web. You can interact with the cloud-based browser just as you would an installed browser, within your local browser on any platform – whether it’s macOS, Linux, or older versions of Windows.

As testing setups are becoming more automated, we are excited to also offer BrowserStack’s Automate testing service under this partnership, for free. This method of testing allows you to run up to 10 Microsoft Edge test sessions via script, which can integrate with your local test runners via the standardized WebDriver API. You can even configure your machine so that the cloud-based browser can see your local development environment—see the Local Testing instructions at BrowserStack to learn more.

Testing Microsoft Edge in BrowserStack using WebDriver automation

To ensure you can test against all possible versions of Microsoft Edge that your users may be using, BrowserStack will be providing three versions of Microsoft Edge for testing: the two most recent “Stable” channel releases, and the most recent “Preview” release (via the Windows Insider Preview Fast ring).

BrowserStack currently serves more than 36,000 companies globally, including Microsoft, AirBnB, and MasterCard. In addition to Microsoft Edge, the service provides more than 1,100 combinations of operating systems and browsers, and its Real Device Cloud allows anyone, anywhere to test their website on a physical Android or iOS device. With data centers located around the world, BrowserStack is trusted by over 1.6 million developers, who rely on the service for the fastest and most accurate testing on physical devices.

We’re very excited to partner with BrowserStack to make this testing service free for Microsoft Edge. Head over to BrowserStack and sign up to get started testing your site in Microsoft Edge today.

I have had a problem for a little while now – the problem is that on my personal laptop I want to use:

Visual Studio with the Windows Phone Emulator

The Hololens emulator

Windows Containers

Linux Containers (through Docker for Windows)

My virtual machines

However, all of these solutions keep on tripping over each other. Specifically, they keep on tripping over each other when it comes to networking configuration. I have spent the last couple of months complaining to the various teams involved in this – and I finally have it all working! Yay!

There were three key things that came together to make this all work:

Improved guidance around Container and VM networking

The networking team has been doing a great job of updating the NAT and Container networking documentation. If you read these documents a couple of months ago – I would highly recommend you revisit them as there is a ton of new information in there.

New installation experience for Windows Containers on Windows 10

Another thing that has changed in the last couple of months is the process for getting Windows Containers up and running on Windows 10. Specifically – we now utilize Docker for Windows to get you up and running. Not only does this make it much easier to get things set up – it means you get very clear error messages when things go wrong. In my case, the error message I received pointed me straight at the problem.

Learning about XDECleanup

It turns out that my problem was that I had a stale network configuration from the Windows Phone Emulator. Handily, the Windows Phone Emulator team ships a tool to help out here – XDECleanup. If you open an administrative command prompt and run “C:\Program Files (x86)\Microsoft XDE\<version>\XdeCleanup.exe” – it will delete and recreate all networking associated with the Windows Phone Emulator.

For me – updating my container setup to use Docker for Windows combined with running XDECleanup finally got me to a world where all my virtualization based development tools happily work side by side.

A practical walkthrough, in six steps

This basic example demonstrates NGINX and swarm mode in action, to provide the foundation for you to apply these concepts to your own configurations.

This document walks through several steps for setting up a containerized NGINX server and using it to load balance traffic across a swarm cluster. For clarity, these steps are designed as an end-to-end tutorial for setting up a three-node cluster and running two docker services on that cluster; by completing this exercise, you will become familiar with the general workflow required to use swarm mode and to load balance across Windows Container endpoints using an NGINX load balancer.

The basic setup

This exercise requires three container hosts–two of which will be joined to form a two-node swarm cluster, and one which will be used to host a containerized NGINX load balancer. In order to demonstrate the load balancer in action, two docker services will be deployed to the swarm cluster, and the NGINX server will be configured to load balance across the container instances that define those services. The services will both be web services, hosting simple content that can be viewed via web browser. With this setup, the load balancer will be easy to see in action, as traffic is routed between the two services each time the web browser view displaying their content is refreshed.

The figure below provides a visualization of this three-node setup. Two of the nodes, the “Swarm Manager” node and the “Swarm Worker” node together form a two-node swarm mode cluster, running two Docker web services, “S1” and “S2”. A third node (the “NGINX Host” in the figure) is used to host a containerized NGINX load balancer, and the load balancer is configured to route traffic across the container endpoints for the two container services. This figure includes example IP addresses and port numbers for the two swarm hosts and for each of the six container endpoints running on the hosts.

System requirements

Three* or more computer systems running Windows 10 Creators Update (available today for members of the Windows Insider program), set up as container hosts (see the topic Windows Containers on Windows 10 for more details on how to get started with Docker containers on Windows 10).

Additionally, each host system should be configured with the following:

Open ports: Swarm mode requires that the following ports be available on each host.

TCP port 2377 for cluster management communications

TCP and UDP port 7946 for communication among nodes

TCP and UDP port 4789 for overlay network traffic
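On a Windows host, these ports can be opened with Windows Firewall rules like the following (the rule names here are arbitrary):

C:\> netsh advfirewall firewall add rule name="Swarm management" dir=in action=allow protocol=TCP localport=2377
C:\> netsh advfirewall firewall add rule name="Swarm node TCP" dir=in action=allow protocol=TCP localport=7946
C:\> netsh advfirewall firewall add rule name="Swarm node UDP" dir=in action=allow protocol=UDP localport=7946
C:\> netsh advfirewall firewall add rule name="Swarm overlay" dir=in action=allow protocol=UDP localport=4789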

*Note on using two nodes rather than three: These instructions can be completed using just two nodes. However, there is currently a known bug on Windows which prevents containers from accessing their hosts using localhost or even the host’s external IP address (for more background on this, see Caveats and Gotchas below). This means that in order to access docker services via their exposed ports on the swarm hosts, the NGINX load balancer must not reside on the same host as any of the service container instances. Put another way, if you use only two nodes to complete this exercise, one of them will need to be dedicated to hosting the NGINX load balancer, leaving the other to be used as a swarm container host (i.e. you will have a single-host swarm cluster and a host dedicated to your containerized NGINX load balancer).

Step 1: Build an NGINX container image

In this step, we’ll build the container image required for your containerized NGINX load balancer. Later we will run this image on the host that you have designated as your NGINX container host.

Note: To avoid having to transfer your container image later, complete the instructions in this section on the container host that you intend to use for your NGINX load balancer.

NGINX is available for download from nginx.org. An NGINX container image can be built using a simple Dockerfile that installs NGINX onto a Windows base container image and configures the container to run an NGINX executable. For the purpose of this exercise, I’ve made a Dockerfile downloadable from my personal GitHub repo; access the NGINX Dockerfile here, then save it to some location (e.g. C:\temp\nginx) on your NGINX container host machine. From that location, build the image using the following command:

C:\temp\nginx> docker build -t nginx .
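The Dockerfile itself lives in the GitHub repo linked above; as a rough sketch, an NGINX-on-Windows Dockerfile generally looks something like this (the NGINX version and paths are illustrative, not the exact contents of that file):

# escape=`
FROM microsoft/windowsservercore

RUN powershell -Command `
    Invoke-WebRequest http://nginx.org/download/nginx-1.12.0.zip -OutFile C:\nginx.zip; `
    Expand-Archive C:\nginx.zip -DestinationPath C:\; `
    Rename-Item C:\nginx-1.12.0 C:\nginx

WORKDIR C:\nginx
ENTRYPOINT ["C:\\nginx\\nginx.exe"]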

Now the image should appear with the rest of the docker images on your system (you can check this using the docker images command).

(Optional) Confirm that your NGINX image is ready

First, run the container:

C:\temp> docker run -it -p 80:80 nginx

Next, open a new command prompt window and use the docker ps command to see that the container is running. Note its ID. The ID of your container is the value of <CONTAINERID> in the next command.

Get the container’s IP address:

C:\temp> docker exec <CONTAINERID> ipconfig

For example, your container’s IP address may be 172.17.176.155.

Next, open a browser on your container host and put your container’s IP address in the address bar. You should see a confirmation page, indicating that NGINX is successfully running in your container.

Step 2: Build images for two containerized IIS Web services

In this step, we’ll build container images for two simple IIS-based web applications. Later, we’ll use these images to create two docker services.

Note: Complete the instructions in this section on one of the container hosts that you intend to use as a swarm host.

Build a generic IIS Web Server image

On my personal GitHub repo, I have made a Dockerfile available for creating an IIS Web server image. The Dockerfile simply enables the Internet Information Services (IIS) Web server role within a microsoft/windowsservercore container. Download the Dockerfile from here, and save it to some location (e.g. C:\temp\iis) on one of the host machines that you plan to use as a swarm node. From that location, build the image using the following command:

C:\temp\iis> docker build -t iis-web .
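For reference, a Dockerfile of this kind can be as small as the following sketch (the keep-alive command is illustrative; the actual file is in the repo linked above):

FROM microsoft/windowsservercore

RUN powershell -Command Install-WindowsFeature Web-Server

# IIS runs as a background Windows service, so give the container a long-running foreground process
CMD ["ping", "-t", "localhost"]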

(Optional) Confirm that your IIS Web server image is ready

First, run the container:

C:\temp> docker run -it -p 80:80 iis-web

Next, use the docker ps command to see that the container is running. Note its ID. The ID of your container is the value of <CONTAINERID> in the next command.

Get the container’s IP address:

C:\temp> docker exec <CONTAINERID> ipconfig

Now open a browser on your container host and put your container’s IP address in the address bar. You should see a confirmation page, indicating that the IIS Web server role is successfully running in your container.

Build two custom IIS Web server images

In this step, we’ll be replacing the IIS landing/confirmation page that we saw above with custom HTML pages–two different pages, corresponding to two different web container images. In a later step, we’ll be using our NGINX container to load balance across instances of these two images. Because the images will be different, we will easily see the load balancing in action as it shifts between the content being served by the container instances of the two images.
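One simple way to build the two images (a sketch; the HTML content and build locations are illustrative) is a pair of near-identical Dockerfiles based on the iis-web image built above, each overwriting the default IIS page, built with the tags web_1 and web_2 respectively:

# Dockerfile for web_1 (the web_2 version differs only in its page content)
FROM iis-web

RUN echo "<html><body><h1>Hello from web service #1</h1></body></html>" > C:\inetpub\wwwroot\index.html

C:\temp\web1> docker build -t web_1 .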

You have now created images for two unique web services; if you view the Docker images on your host by running docker images, you should see that you have two new container images—“web_1” and “web_2”.

Put the IIS container images on all of your swarm hosts

To complete this exercise you will need the custom web container images that you just created to be on all of the host machines that you intend to use as swarm nodes. There are two ways for you to get the images onto additional machines:

Option 1: Repeat the steps above to build the “web_1” and “web_2” containers on your second host.

Step 3: Join your hosts to a swarm

As a result of the previous steps, one of your host machines should have the nginx container image, and the rest of your hosts should have the Web server images, “web_1” and “web_2”. In this step, we’ll join the latter hosts to a swarm cluster.

Note: The containerized NGINX load balancer cannot run on the same host as any container endpoints for which it is performing load balancing; the host with your nginx container image must be reserved for load balancing only. For more background on this, see Caveats and Gotchas below.

First, run the following command from any machine that you intend to use as a swarm host. The machine that you use to execute this command will become a manager node for your swarm cluster.

Replace <HOSTIPADDRESS> with the public IP address of your host machine
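The initialization command referenced here takes the following standard form:

C:\temp> docker swarm init --advertise-addr=<HOSTIPADDRESS> --listen-addr <HOSTIPADDRESS>:2377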

Now run the following command from each of the other host machines that you intend to use as swarm nodes, joining them to the swarm as worker nodes.
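A sketch of the join command, using the placeholders explained below:

```
C:\temp> docker swarm join --token <WORKERJOINTOKEN> <MANAGERIPADDRESS>:2377
```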

Replace <MANAGERIPADDRESS> with the public IP address of your host machine (i.e. the value of <HOSTIPADDRESS> that you used to initialize the swarm from the manager node)

Replace <WORKERJOINTOKEN> with the worker join-token provided as output by the docker swarm init command (you can also obtain the join-token by running docker swarm join-token worker from the manager host)

Your nodes are now configured to form a swarm cluster! You can see the status of the nodes by running the following command from your manager node:

C:\temp> docker node ls

Step 4: Deploy services to your swarm

Note: Before moving on, stop and remove any NGINX or IIS containers running on your hosts. This will help avoid port conflicts when you define services. To do this, simply run the following commands for each container, replacing <CONTAINERID> with the ID of the container you are stopping/removing:

C:\temp> docker stop <CONTAINERID>
C:\temp> docker rm <CONTAINERID>

Next, we’re going to use the “web_1” and “web_2” container images that we created in previous steps of this exercise to deploy two container services to our swarm cluster.

To create the services, run the following commands from your swarm manager node:
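A sketch of what those commands look like, assuming the web_1 and web_2 images from step 2; --endpoint-mode dnsrr (DNS round-robin) is used here because swarm mode’s routing mesh is not available on Windows (see Caveats and Gotchas):

```
C:\> docker service create --name=s1 --publish mode=host,target=80 --endpoint-mode dnsrr web_1
C:\> docker service create --name=s2 --publish mode=host,target=80 --endpoint-mode dnsrr web_2
```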

You should now have two services running, s1 and s2. You can view their status by running the following command from your swarm manager node:

C:\> docker service ls

Additionally, you can view information on the container instances that define a specific service with the following commands, where <SERVICENAME> is replaced with the name of the service you are inspecting (for example, s1 or s2):
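For example, to list the container instances (tasks) backing a service:

```
C:\> docker service ps <SERVICENAME>
```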

Step 5: Configure your NGINX load balancer

Now that services are running on your swarm, you can configure the NGINX load balancer to distribute traffic across the container instances for those services.

Of course, generally load balancers are used to balance traffic across instances of a single service, not multiple services. For the purpose of clarity, this example uses two services so that the function of the load balancer can be easily seen; because the two services are serving different HTML content, we’ll clearly see how the load balancer is distributing requests between them.

The nginx.conf file

First, the nginx.conf file for your load balancer must be configured with the IP addresses and service ports of your swarm nodes and services. An example nginx.conf file was included with the NGINX download that was used to create your nginx container image in Step 1. For the purpose of this exercise, I copied and adapted the example file provided by NGINX and used it to create a simple template for you to adapt with your specific node/container information.

Download the nginx.conf file template that I prepared for this exercise from my personal GitHub repo, and save it onto your NGINX container host machine. In this step, we’ll adapt the template file and use it to replace the default nginx.conf file that was originally downloaded onto your NGINX container image.

You will need to adjust the file by adding the information for your hosts and container instances. The template nginx.conf file provided contains the following section:
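A sketch of the relevant excerpt (the upstream name appcluster is illustrative):

```
http {
    upstream appcluster {
        server <HOSTIP>:<HOSTPORT>;
        server <HOSTIP>:<HOSTPORT>;
        server <HOSTIP>:<HOSTPORT>;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://appcluster;
        }
    }
}
```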

To adapt the file for your configuration, you will need to adjust the <HOSTIP>:<HOSTPORT> entries in the template config file. You will have an entry for each container endpoint that defines your web services. For any given container endpoint, the value of <HOSTIP> will be the IP address of the container host upon which that container is running. The value of <HOSTPORT> will be the port on the container host upon which the container endpoint has been published.

When the services, s1 and s2, were defined in the previous step of this exercise, the --publish mode=host,target=80 parameter was included. This parameter specified that the container instances for the services should be exposed via published ports on the container hosts. More specifically, by including --publish mode=host,target=80 in the service definitions, each service was configured to be exposed on port 80 of each of its container endpoints, as well as a set of automatically defined ports on the swarm hosts (i.e. one port for each container running on a given host).

First, identify the host IPs and published ports for your container endpoints

Before you can adjust your nginx.conf file, you must obtain the required information for the container endpoints that define your services. To do this, run the following commands (again, run these from your swarm manager node):

C:\> docker service ps s1
C:\> docker service ps s2

The above commands will return details on every container instance running for each of your services, across all of your swarm hosts.

One column of the output, the “ports” column, includes port information for each host of the form *:<HOSTPORT>->80/tcp. The values of <HOSTPORT> will be different for each container instance, as each container is published on its own host port.

Another column, the “node” column, will tell you which machine the container is running on. This is how you will identify the host IP information for each endpoint.

You now have the port information and node for each container endpoint. Next, use that information to populate the upstream field of your nginx.conf file; for each endpoint, add a server entry to the upstream field of the file, replacing the <HOSTIP> field with the IP address of each node (if you don’t have this, run ipconfig on each swarm host machine to obtain it), and the <HOSTPORT> field with the corresponding host port.

For example, if you have two swarm hosts (IP addresses 172.17.0.10 and 172.17.0.11), each running three containers, your list of servers will end up looking something like this:
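Something along these lines (the upstream name and host ports below are illustrative; use the values returned by docker service ps):

```
upstream appcluster {
    server 172.17.0.10:21858;
    server 172.17.0.10:64199;
    server 172.17.0.10:44356;
    server 172.17.0.11:32834;
    server 172.17.0.11:47901;
    server 172.17.0.11:21005;
}
```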

Once you have changed your nginx.conf file, save it. Next, we’ll copy it from your host to the NGINX container image itself.

Replace the default nginx.conf file with your adjusted file

If your nginx container is not already running on its host, run it now:

C:\temp> docker run -it -p 80:80 nginx

Next, open a new command prompt window and use the docker ps command to confirm that the container is running. Note its ID; the ID of your container is the value of <CONTAINERID> in the next command.

Get the container’s IP address:

C:\temp> docker exec <CONTAINERID> ipconfig

With the container running, use the following command to replace the default nginx.conf file with the file that you just configured (run the following command from the directory in which you saved your adjusted version of the nginx.conf on the host machine):

C:\temp> docker cp nginx.conf <CONTAINERID>:C:\nginx\nginx-1.10.3\conf

Now use the following command to reload the NGINX server running within your container:

C:\temp> docker exec <CONTAINERID> nginx.exe -s reload

Step 6: See your load balancer in action

Your load balancer should now be fully configured to distribute traffic across the various instances of your swarm services. To see it in action, open a browser and do one of the following:

If accessing from the NGINX host machine: Type the IP address of the nginx container running on the machine into the browser address bar. (This is the IP address returned by the docker exec <CONTAINERID> ipconfig command above.)

If accessing from another host machine (with network access to the NGINX host machine): Type the IP address of the NGINX host machine into the browser address bar.

Once you’ve typed the applicable address into the browser address bar, press enter and wait for the web page to load. Once it loads, you should see one of the HTML pages that you created in step 2.

Now press refresh on the page. You may need to refresh more than once, but after just a few times you should see the other HTML page that you created in step 2.

If you continue refreshing, you will see the two different HTML pages that you used to define the services, web_1 and web_2, being accessed in a round-robin pattern (round-robin is the default load balancing strategy for NGINX, but there are others). The animated image below demonstrates the behavior that you should see.

As a reminder, below is the full configuration with all three nodes. When you’re refreshing your web page view, you’re repeatedly accessing the NGINX node, which is distributing your GET request to the container endpoints running on the swarm nodes. Each time you resend the request, the load balancer has the opportunity to route you to a different endpoint, resulting in your being served a different web page, depending on whether your request was routed to an s1 or s2 endpoint.

Caveats and gotchas

Q: Is there a way to publish a single port for my service, so that I can load balance across each of my services rather than each of the individual endpoints for my services?

Unfortunately, publishing a single port for a service is not yet supported on Windows. This capability is swarm mode’s routing mesh feature, which allows you to publish ports for a service so that the service is accessible to external resources via that port on every swarm node.

Routing mesh for swarm mode on Windows is not yet supported, but will be coming soon.

Q: Why can’t I run my containerized load balancer on one of my swarm nodes?

Currently, there is a known bug on Windows that prevents containers from reaching their own hosts using localhost or even the host’s external IP address. This means containers cannot access their host’s exposed ports; they can only access exposed ports on other hosts.

In the context of this exercise, this means that the NGINX load balancer must be running on its own host, and never on the same host as any services that it needs to access via exposed ports. Put another way, for the containerized NGINX load balancer to balance across the two web services defined in this exercise, s1 and s2, it cannot be running on a swarm node—if it were running on a swarm node, it would be unable to access any containers on that node via host exposed ports.

Of course, an additional caveat here is that containers do not need to be accessed via host exposed ports. It is also possible to access containers directly, using the container IP and published port. If that approach were used for this exercise instead, the NGINX load balancer would need to be configured to:

Access containers that share its host by their container IP and port

Access containers that do not share its host by their host’s IP and exposed port

There is no problem with configuring the load balancer in this way, other than the added complexity that it introduces compared to simply putting the load balancer on its own machine, so that containers can be uniformly accessed via their host’s IPs and exposed ports.

At the end of last week we published version 5.0 of the Hypervisor Top Level Functional Specification. This version details the state of the hypervisor in Windows Server 2016. You can download it from here:

This is a minor update to correct the RPMs for a kernel ABI change in Red Hat Enterprise Linux, CentOS, and Oracle Linux’s Red Hat Compatible Kernel version 7.3. Version 3.10.0-514.10.2.el7 of the kernel was sufficiently different for symbol conflicts to break the LIS kernel modules and create a situation where a VM would not start correctly. This version of the modules is compatible with the new kernel.