nsquared blog

Tuesday, January 8, 2019

In the last couple
of months I have been back in training mode and helping people understand more
about containers and how to manage them. The focus has been on Azure's container
offerings, although a lot of the work is very platform agnostic, and that is
really the point of containers.

For those that have
never used Azure before, or need a refresher on some of the great things you
can do, both in the Azure Portal and from the command line, here are a couple of
short modules on the new Microsoft Learn site:

I encourage everyone
that uses Azure to learn how to use the command line tools. For the longest
time I avoided the CLI as I like the way the Portal allows you to discover new
features through exploration, which has always been the advantage of a graphical
user interface over the command line. Yet the strength of learning the command
line tools is how fast you can get tasks accomplished as well as the ability to
automate tasks with scripts.

Microsoft Learn has
a module on automating Azure tasks here.

In order to
understand the value of containers, I believe it is important to know how we (as
an industry) got to this point. I find that understanding the history of
technology helps to explain the current situation. (It also makes it easier to
extrapolate potential futures.) From the perspective of server
technologies, and hosting applications with an intention to scale them,
virtualization and virtual machines have been a standard mechanism for the
longest time. If you have never used a VM (virtual machine) you can get some
perspective by following this lab and setting up a VM.

A Virtual Machine
abstracts the physical hardware upon which it runs. When you deploy an
application to a VM you should not care what the actual hardware is. Often a
physical server can be running multiple virtual machines, and this provides a
higher level of potential resource utilization. Considering that data centres
are starting to become a noticeable consumer of the world's power supply, it should
be obvious that the more energy efficient we can be with our servers, the better
we conserve our global resources. Virtualization is not only
economically efficient; it can also be considered ecologically more efficient
than running everything on physical hardware. A virtual machine hosts an
operating system, and the software you deploy will need to run on that operating system. Virtual Machine technology is
what enables most of the world's big Cloud providers to work. For a high-level
understanding of how Azure works, this video provides a good overview.

A container provides a host to run a software application that
abstracts another level of concern away from the deployment. While Virtual
Machines provide an abstraction from the physical hardware, containers provide
an abstraction from the operating system.

By being abstracted
from the OS (operating system), there is no longer a need to be concerned about
the setup of the OS, how it hosts your application, where it has your
application installed, and numerous other issues. This enables a more agile
approach to be taken when deploying applications. An application running in a
container can easily be moved from one location to another: simply move the
container, and everything you need comes with it.

Containers also
enable an even greater level of resource (energy) efficiency. When an
application is hosted in a virtual machine, you scale the application by
creating copies of that virtual machine (scaling out), or by adding more compute
resources to the virtual machine (scaling up). The virtual machine is the unit of
scaling. With a container, the unit of scaling is closer to the application:
you scale out your application by creating more instances of your container. As
a virtual machine can host multiple container instances, you are going to
get more from the potential resources than if you only have the application
running once per virtual machine.

Containers are not
limited to specific types of application or programming languages either. Most
common languages and types of app can be containerized. A container can run on
a local machine, your laptop, a big data centre server, or an IoT device. This
means you can build and test containers locally and then deploy them to scale
with the confidence they will work the same way.

One of the most
popular container runtimes is called Docker. Docker provides container
compatibility between Mac, Windows, and Linux machines. Using Docker, an
application can be deployed to a local container, tested and then the container
image can be deployed to scale by creating multiple instances of that container
image on devices anywhere you desire (as long as it supports Docker).

When you create a
container (or Docker) image you are defining the contents of the container. You
might consider it a template definition of what the container will be running.
If you are a programmer then think about a container image as a class definition.
A container instance is a running version of the container image. Using the
programming metaphor again, the instance would be an object instantiated from
the image (class).
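The class/object metaphor can be made concrete in a few lines of Python. This is an illustrative sketch only (the ContainerImage and ContainerInstance classes here are hypothetical, not a real Docker API):

```python
class ContainerImage:
    """Analogous to a container image: a template defining what will run."""
    def __init__(self, app_name, version):
        self.app_name = app_name
        self.version = version

    def run(self):
        # "instantiating" the image gives you a running container instance
        return ContainerInstance(self)


class ContainerInstance:
    """Analogous to a running container: created from an image."""
    def __init__(self, image):
        self.image = image


# one image (class definition) ...
image = ContainerImage("my-web-app", "1.0")

# ... many instances (objects), just as one container image can back many
# running containers scaled out across machines
instance_a = image.run()
instance_b = image.run()
print(instance_a.image.app_name)  # my-web-app
print(instance_a is instance_b)   # False: two separate running instances
```
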

In order to manage
your containers and make best use of the
resources, you will want to perform tasks such as scheduling, monitoring,
scaling, connectivity (networking), upgrades, and failure management. This is
where Kubernetes comes into play. Kubernetes is an orchestration tool for
containers. This means Docker and Kubernetes work together to deliver a
great container experience.

To understand the
basics of Containers and Kubernetes watch this video on Channel 9.

If you want to set up
Kubernetes on a machine (or virtual machine) there is a fair amount of work
that needs to be done: you need to set up routing tables, storage to support
your applications, and so on. The great thing about using Kubernetes in Azure is that the setup is all managed for you.

There is a tutorial that you can follow to get a simple website running in a Docker
container and then use AKS (Azure Kubernetes Service) to deploy and orchestrate
the containers. It should take you around an hour to complete and should help
you to better understand how Docker and Kubernetes work together using Azure
services.

If you do not need
the orchestration provided by Kubernetes and simply want a container running
your app in the Cloud, then Azure Container Instances (ACI) simplifies the setup
process even further. You can think of the ACI service as providing 'serverless'
containers. You do not need to care about managing servers at all. This
tutorial will help you understand how to use Azure Container Instances.

Monday, August 13, 2018

Over the last few weeks, we have written a series of articles focusing on the challenges that people deal with in the meeting room every day. These challenges are some of the things we are solving, as we progress with our mission to create the Intelligent Meeting Room.

Pay attention! The cost of losing focus.
How can we remove distractions and create high quality output?
Read more

Personal devices, who is in charge?
We should be the ones in control of when we look at our phone, not our phone deciding that we should look at it.
Read more

Connect me to my meeting.
How many times have you walked into a meeting only to find the presentation equipment requires an IT support person to make it work?
Read more

Show me my content.
How many times have you sat in a meeting and things just took too long?
Read more

Sticking to the agenda.
How many meetings do you attend each month, for which there is no clear agenda?
Read more

What did we decide?
Have you ever been in a meeting where great decisions are made, only to realise a week later that a number of the decisions have been lost or forgotten?
Read more

Make that meeting an email.
How many times have you sat in a meeting and realized that the only reason you are there is because someone has a couple of facts to share, that could easily have been sent in an email?
Read more

Meetings, now and then.
What has really changed in how we communicate ideas and come up with plans?
Read more

The environment for getting things done.
How many times have you walked into a space and immediately felt more relaxed or happier?
Read more

The strange thing about privacy.
Is it possible that removing privacy from many scenarios improves the situation for the group of people involved?
Read more

Sunday, August 12, 2018

Continuing on from a previous blog post, Test Driven Development, we would like to discuss the defensive coding approach in this blog post.

What?
Defensive programming is an approach to improve software quality by "making the software behave in a predictable manner despite unexpected input or user actions". The software's behaviour should be consistent even in undesirable conditions.

When?
Defensive programming techniques are especially useful when a piece of software could be misused, mischievously or inadvertently, to cause a catastrophic effect.

Why?
One of the compelling reasons to perform defensive coding is that catching exceptions is computationally expensive. It is useful to follow techniques that allow the program to continue by gracefully handling the exceptional conditions, without throwing an exception.
Defensive coding also reduces the number of bugs and ensures code correctness.
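As a minimal sketch of that idea in Python (the function name and default value here are our own, for illustration): a guard that checks the input up front lets the program continue with a sensible fallback, instead of throwing and then catching an exception.

```python
def safe_divide(numerator, denominator, default=0.0):
    # guard statement: handle the exceptional condition without raising,
    # so no ZeroDivisionError is ever thrown or caught
    if denominator == 0:
        return default
    return numerator / denominator


print(safe_divide(10, 2))  # 5.0
print(safe_divide(10, 0))  # 0.0 (graceful fallback, no exception)
```
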

How?
The most widespread practice is to use guard statements like:

Null Checks

Pre-conditions

Assertions

- Do not repeat the guard statements. We often tend to repeat code to perform validations. This repeated guard usage can lead to primitive obsession and wasted computational cycles.
- In such cases, it is always better to either include abstractions to perform the validations or extract the duplicates into separate and reusable checks.
- Since these validations are very crucial and spread across the code base, they should be kept intact. It also helps us to adhere to the DRY (Don’t Repeat Yourself) principle.
- Always wrap a third party library usage with our own gateways or proxies.
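A hypothetical Python sketch of the last two points: the validation is extracted into a single reusable check (keeping the guards DRY), and the third-party library sits behind our own gateway so the rest of the code base never calls it directly. All names here (require_not_none, PaymentGateway) are illustrative, not from any real library:

```python
def require_not_none(value, name):
    # one reusable guard instead of repeating "if x is None" everywhere
    if value is None:
        raise ValueError(f"{name} must not be None")
    return value


class PaymentGateway:
    """Our own wrapper (gateway/proxy) around a hypothetical
    third-party payments client."""

    def __init__(self, client):
        self._client = client  # third-party object hidden behind this proxy

    def charge(self, account_id, amount):
        # pre-conditions validated once, at the boundary
        require_not_none(account_id, "account_id")
        require_not_none(amount, "amount")
        return self._client.charge(account_id, amount)
```

If the third-party library ever changes its API, only this gateway class needs updating; the guard checks stay in one place instead of being scattered across the code base.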

Sunday, August 5, 2018

Setting up Hyper-V Server 2016 as Host, and connecting a Client computer with Hyper-V Manager in Windows 10

Scenario: You want to use Hyper-V Manager, with a Windows 10 Pro client computer, to connect to Windows Hyper-V Server 2016, which is a Workgroup computer (non-domain joined).

Prerequisites: HOST Server
- A bootable USB with a Hyper-V Server 2016 ISO Image. You can download it for free from here.
- Target computer to be your Hyper-V Server.
- One (1) Terabyte of storage disk space. This very much depends on what you want to do with your server, though we will assume you want to use it for Virtual Machines.
- Minimum 4GB RAM (assuming you are going for Virtual Machines, though even then recommended at least 8GB).
- Network connection to server.
- Keyboard and mouse plugged into your server computer.

3. You will see a message: Press any key to boot from CD or DVD. Press any key (such as enter).

4. The Install Microsoft Hyper-V Server Wizard will begin, where you will need to provide the Language to install, your Time and currency format, and your preferred Keyboard or input method; click Next when you are done:

5. Setup will begin (with a Setup is starting message), then you will be prompted to agree to the licensing terms by checking the checkbox and clicking Next.

6. For the purpose of this tutorial, we are setting up the server for the first time (rather than upgrading), so on the next window, select Custom: Install the newer version of Hyper-V Server only (advanced).

7. On the next window, you will need to choose which Drive you wish to install onto. If you have multiple drives, you will need to select one to be the primary drive.

8. If your drive is NOT empty, now is the time to Format it: do this by selecting it within the inner window and click the Format button. Be aware, you will only need about eighty (80) gigabytes for Hyper-V Server 2016 to run well, so consider creating a new partition of this size, and selecting that.

9. With your drive ready to go, click Next. The hypervisor will be installed locally, which may take some time (10-30 minutes, depending on your computer's power). Once finished, the server will reboot and load the hypervisor for the first time:

10. Upon rebooting, the Hyper-V Server will scan available hardware and load Windows drivers.

11. When it is all ready, you will be prompted to set an Administrator password. Use the arrow keys to navigate to OK, and then press the enter key to continue.

12. Type your new password; it needs to have a level of complexity, so try lower and upper case letters, numbers, and a special character (!@#$%^&()?/_). This suggestion is for your security too, so set something reasonably strong. Press enter once done.

13. You will be prompted that your password has been changed, once you have set it well. Press the enter key on OK, to continue:

14. Two screens will be loaded: Server Configuration and a CLI (behind it):

a. Server Configuration screen (blue):
i. This screen allows the administrator to make changes to the most common and necessary settings to ensure the server runs properly.
ii. Make changes by entering the corresponding number next to the setting you wish to modify. You will then be prompted to enter the change to be made. Depending on the change, a reboot may be triggered, which you must do now or postpone.

b. Command-line interface (CLI) (black):

i. Once you have exited the Server Configuration screen, you will be taken to the CLI screen. You can load PowerShell from here, and also load back into the Server Configuration screen:
- PowerShell: powershell
- Server Config: C:\Windows\System32\Sconfig.cmd

15. You must now enable Hyper-V Server remote management. This is accomplished through using the Server Config screen:
a. Type: 4, and press enter. This will open the settings for Remote Management.
b. Type: 1, to open config for Remote Management.
c. Type: E, and press enter, to enable Remote Management.
d. Also within Remote Management, type: 3, to enable Remote Ping. Click Yes, when prompted.
e. Navigate back to Server Config.
f. Next, type: 7, and press enter. This will open the settings for Remote Desktop.
g. Type: E, and press enter.
h. Then type: 1. This will enable remote desktop (you will get a prompt about the level of security). For this example, we are using more secure, though choose what suits your setup.
i. A reboot may be required.

16. Once rebooted, open the hypervisor again (Ctrl + Alt + Del). Then, type: 14, to exit to command-line (CLI).

17. Within the CLI, type: powershell, and press the enter key.

18. PowerShell will begin within the CLI (you will be able to see this, as Windows PowerShell will be printed as a message, before providing the cursor back to you).

25. The next command will allow other computers to access your Host server drive (important when installing your ISOs). It is recommended that, once you have copied your ISOs to the local drive, you disable this.

26. Last, you need the IP Address of the server, and to note down your server FQDN (your fully qualified domain name):
- Navigate back to the Server Config screen, by typing: sconfig and pressing the enter key.
- Type: 8 in Server Config, and press the enter key, to open the Network Settings section, and note down your IP Address and the server name (FQDN); you can edit the server name from Server Config also, just ensure you note it down, and reboot after the update.

d. In PowerShell, type: Enable-PSRemoting -SkipNetworkProfileCheck.
- This parameter is added so that you do not need to go through your computer changing all your networks to Private or Domain. Be aware, though, that this setting means only connections on the same subnet as your computer will be able to connect (which is fine in this case, as the server Host is on the same network).
e. Next, within PowerShell, Type: gpedit and press the enter key. The Local Group Policy Editor will open, where you need to update a setting:

a. Navigate through: Computer Configuration > Administrative Templates > System > Credentials Delegation.
b. Click Credentials Delegation, and in the window to the right, find Allow delegating fresh credentials with NTLM-only server authentication. Double-click to open.
c. Change the configuration to Enabled, by clicking it. Then, within the Options box below that, click the newly enabled Show... button.
d. The Show Contents popup window will open. Click the first row, second column, under the Value header, and type: wsman/host-server-fqdn (i.e. wsman/SERV01). Ensure that you place "wsman/" before the Host name.
e. Click OK on the Show Content Window, and then lastly click Apply on the Allow delegating fresh credentials with NTLM-only server authentication window. Make sure to leave gpedit open for the next step.

6. With gpedit still open, you also need to set the Encryption Oracle Remediation:
a. Navigate through: Computer Configuration > Administrative Templates > System > Credentials Delegation > Encryption Oracle Remediation.
b. It will likely be set as Not configured. Update this to Enabled.
c. Once enabled, within the Options box below, click the Protection Level drop-down box, and update it to Vulnerable.
d. Click Apply once set, and then OK.

10. Type: Y when prompted to enable CredSSP authentication, then press the enter key, to accept.

11. Type: MOFCOMP %SYSTEMROOT%\System32\WindowsVirtualization.V2.mof, and press the enter key, to accept parsing the Virtualization (V2) file, using the MOF compiler.

12. Add the remote login next, by typing: cmdkey /add:FQDN /user:Administrator /pass:p@ssword1, (where you need to change FQDN with your Host server name, provide your user name - probably Administrator - and then provide your password for that user). Press the enter key to register the login information, which is necessary to connect with the Hyper-V Server.

13. As an additional OPTIONAL step, to help you copy your ISOs to your server Host, map the Host drive to your client computer, by typing: Net use \\FQDN\c$. Be aware, 'c$' here, denotes the main drive of the server. This may vary, depending on your set up.

14. Open Hyper-V Manager, and click the Hyper-V Manager text, located in the left-side panel, to select it.

15. With that selected, the far-right panel will provide the option to Connect to Server...: click this, and a popup window called Select Computer will appear.

16. With the Another Computer: radio button selected, type in the FQDN of your Host server, and press the enter key. You should NOT tick Connect as another user.

It is possible to completely automate the above client computer setup with PowerShell, so the next worthwhile step for you to take is to create that script. Keep in mind, though, that any edits to the registry require a reboot of your client computer to take effect.

Thursday, July 19, 2018

Ever wanted to be a cartoon? Where there is a will, there is a way! Today, we are going to show you how you can take any image, and turn it into a cartoon style graphic using Adobe Illustrator.

1. Open up a new Illustrator file. The dimensions for this tutorial are 1296px by 864px, as they are the dimensions of the image we will be using. You can choose whichever dimensions you like. Click Create once you have selected your dimensions.

2. Once the new file loads, press Cmd+Shift+P on Mac, or Ctrl+Shift+P on Windows on your keyboard, to open up a file directory to access your image. Alternatively, to find the same menu you can go to File > Place.

3. Locate the image in the file directory, then click Place.

4. Place and resize the image to where you would like it on the Artboard.

5. Go to Window > Image Trace to open the Image Trace panel.

6. Now for the Fun! Select the image, and look over to the Image Trace panel we just opened.

7. Click the Preset drop down menu. These are all the settings for the image tracing, of which some will give you further options to tweak once selected. For this tutorial, select Sketched Art.

8. Under Preset, is the View drop down menu. Select Tracing Result.

9. Under View, is the Mode drop down menu. This is where you can select the colour mode. For this tutorial, we are going to select Colour.

10. You will then have Palette options beneath View. We find the most cartoon-like option to be Limited, but feel free to see which suits your image best.

11. You can reduce the amount of colours in the Colors option beneath Palette. For this example, we have set it to 8.

12. Once you have played with the settings and are happy with the image, change View to Tracing Result with Outlines. This gives us outlines of where we are about to colour in.

13. Click the image and go to Object > Expand. Ensure Fill and Stroke are selected, then click OK.

14. Once your object is expanded, colour away! Click a shape you would like to fill, and select a colour you would like.

15. Once you’re done, don’t forget to save the Illustrator file. Go to File > Save As, and navigate to a location on your device.

16. To export your image, go to File > Export > Export As, and navigate to a location on your device.

17. Select the format you wish to export the image as, from the Save as type drop down menu. For this example, we will select PNG. If you have selected PNG, a PNG Options panel will appear. Select the Resolution, Anti-aliasing, and Background Color that suits you, and click OK.

Tuesday, July 10, 2018

In a previous post Adding AI to Mixed Reality, we
discussed how we have been helping developers add Artificial Intelligence (AI) to their mixed reality applications.

The hands-on-labs from these events are now public on the Microsoft Mixed Reality Academy. These labs teach you how to add Microsoft Cognitive Services and Machine learning to a Unity application running on HoloLens, or in Microsoft's Mixed Reality Portal.

One of the attendees of a recent event in Redmond wrote about his experiences at a week-long hack, where teams were formed around specific scenarios. You can read Ryan's thoughts on the event here.

We have been working hard to make this training material available to everyone. To help you get started with mixed reality development and design, you can find our Kindle eBook on
Amazon, Developing and Designing for Mixed Reality.

Sunday, July 8, 2018

Some programming situations require handing off data processing sub-tasks to different processes, rather than to background threads; the ways in which you can do that are discussed in this blog.

In many programming languages, a single process (or thread) runs on a single CPU core at a time. In Python in particular (because of the global interpreter lock), threads created by a process share time on a single CPU core with the main line of the program. If you have a multi-core CPU, your application is going to leave all the other cores idle if it is organised as a single process with threads. That might not matter on high-powered machines, but it can become an issue on small, slower devices (e.g. IoT devices, or mobile phones) that might have to do serious data crunching: perhaps in a signal processing or AI machine learning environment.

Let’s look at some ways of making an application run on multiple CPU cores. For this blog we will use Python, although in a Linux environment most languages have inherited equivalent methods all deriving from the original C implementations.

In Python, there are basically two ways of starting another application from within an initial parent application. You can use the “Popen” method from the “subprocess” module, or the “fork” method from the “os” module. Either method results in a separate process running alongside the parent process, and hopefully the OS will assign the new process to a CPU core of its own. Each process may need to use the “os” module’s “nice” method to set its priority high enough to get a core, or the Linux shell command “taskset” might be required to set the affinity of a process to a specific CPU core.

Using “Popen”, or “fork”, will depend on the amount, and type, of data you need to pass to the child process from the parent process. “Popen” passes command line parameters to the child (which is launched from the file referenced in the command line parameters). On the other hand, “fork” causes the parent process to be duplicated and continue executing from the fork point in the code. This makes “fork” a very interesting mechanism. Let’s look at the code in a typical “fork” scenario:

The application sets up data, class instances, and so on, ready for use by both parent and child. Once the common code is done, we are ready to split into two processes:

import os

# common setup, inherited by both parent and child
some_data_to_be_passed_on = [5, 6, 7, 8]

parent_pid = os.getpid()                # line 1
fork_pid = os.fork()                    # line 2
if fork_pid == 0:                       # line 3
    # this branch is running the child process
    # (the fork method returns 0 in the child process)
    # we can find out our pid as the child process
    child_pid = os.getpid()             # line 4
    # now we add code which does the tasks required of the child
else:                                   # line 5
    # this branch is running in the parent process; the assert below will pass
    assert os.getpid() == parent_pid    # line 6
    # now we add code which does the tasks required of the parent
# statements following the "if"/"else"  # line 7

The fork operation can be confusing to use, even though it has some extremely useful characteristics, so we need to go through the numbered lines above to make things clear.

At line 1, a variable is set to the process id (pid) of the parent process, which has run through its code down to line 1. At line 2, the fork operation is invoked. The fork copies the current application (which we are calling the parent here) including its working memory, its current stack, and the state of all its open files, and lets the copied application (which we are calling the child process) run from line 3. The original process is still alive, and it also moves on to line 3.

If both the parent and child are the same, and execute the same code, how will the child do something different to the parent? The answer is in the value returned by the fork operation, which is being examined by both the parent and child, at line 3. In the child process, the fork operation returns zero while in the parent, the value returned is non-zero. The outcome is that the child process will execute line 4, after line 3, while the parent process will follow the “else” at line 5 and execute line 6 after line 3.

When there is a need to pass the child process a great deal of information, forking provides the opportunity for any amount of data to be prepared. From the example above, both the child and parent inherit the local variable “some_data_to_be_passed_on” and its value, among all the other local and global variables.

In summary, the fork operation is a very powerful way of passing lots of data to a child process, but it requires some care in its use. BOTH processes (unless they terminate early) will arrive at line 7. Depending on the application design, it may be perfectly fine for both processes to proceed through the code following line 7, but on the other hand it might be an unexpected problem. It’s an unusual mindset for a programmer to look at a single piece of code from the point of view of two (or even more) processes and keep track of what should happen in them at the same time.
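For comparison, here is a minimal “Popen” sketch. Unlike “fork”, the child is a fresh process launched from a command line, so the data must be serialised into command-line parameters (or sent over pipes) rather than inherited. The tiny child program here is inlined with Python's -c flag purely for illustration:

```python
import subprocess
import sys

# the data is serialised into command-line parameters for the child
data_to_pass = [5, 6, 7, 8]
child = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; print(sum(int(x) for x in sys.argv[1:]))"]
    + [str(n) for n in data_to_pass],
    stdout=subprocess.PIPE, text=True)

# the parent is free to do other work here, then collect the child's result
output, _ = child.communicate()
print(output.strip())  # 26
```

Because the child is a separate process, the OS is free to schedule it on another CPU core, which is exactly the multi-core behaviour discussed above.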