Cloud, Virtualisation & Management (https://cloudtidings.com)
Azure, Hyper-V, System Center, Identity and Windows Server in general
Modernize your workload with #AKS #Kubernetes #Containers #MicroServices

When it comes to Application Modernisation, we can't argue that Containers are leading the way. With Containers you can wrap up an application into its own isolated box, meaning that the app will have no knowledge of any other applications or processes that exist outside of its box.

With Containers, you can wrap up a monolithic application or create a more modern approach: a microservice-based architecture, in which the application is built on a collection of services that can be developed, tested, deployed, and versioned independently, which is perfect for mission-critical application scenarios.

If you own the app source code and are on an optimisation path, I would recommend the microservices approach, which allows agile changes and rapid iteration, letting you change specific areas of complex, large, and scalable applications. But if you do not have the source code, or breaking the application into small pieces is not feasible, you can still look at Containers as a way to modernize the app. Either way, you also need to consider: Automation, Management, High Availability, Networking, Scalability, Upgrades and Monitoring requirements.

Automating and Managing Containers:

The task of automating and managing a large number of containers and how they interact is known as orchestration. Azure offers two container orchestrators: Azure Container Service (AKS) and Service Fabric.

Azure Container Service (AKS) makes it simple to create, configure, and manage a cluster of virtual machines that are preconfigured to run containerized applications. This enables you to maintain application portability through Kubernetes and the Docker image format.

Azure Service Fabric is a distributed systems platform that makes it easy to package, deploy, and manage scalable and reliable microservices and containers. Service Fabric addresses the significant challenges in developing and managing cloud-native applications, and represents the next-generation platform for building and managing these enterprise-class, tier-1, cloud-scale applications running in containers.

Microsoft released a guide to help you learn how to move your existing .NET Framework server applications directly to the cloud by modernizing specific areas, without re-architecting or recoding entire applications. You can download this eBook in multiple formats.

The book will be your best companion for day-to-day virtualization needs within your organization, as it takes you through a series of recipes to simplify and plan a highly scalable and available virtual infrastructure. You will learn deployment tips, techniques, and solutions designed to show you how to implement VMM 2016 in real-world scenarios. The chapters are arranged so that you can implement VMM 2016 and the additional solutions required to effectively manage and monitor your fabrics and clouds. We cover the most important new features in VMM 2016 across networking, storage, and compute, including the brand-new Guarded Fabric, Shielded VMs and Storage Spaces Direct. The recipes provide step-by-step instructions, giving you the simplest way to dive into VMM fabric concepts, private cloud, and integration with external solutions such as VMware, Operations Manager, and the Windows Azure Pack.

By the end of this book, you will be armed with the knowledge you require to start designing and implementing virtual infrastructures in VMM 2016.

The book has been updated to reflect the changes in the VMM 2016 1801 release.

Wondering how you could use Microsoft OMS to have a single view of the jobs’ status across multiple VMM instances?

Well, you can now deploy an open-source solution that can be included in your OMS workspace called Virtual Machine Manager Analytics. This solution brings the job data of your on-premises VMM instances into Log Analytics in OMS. VMM admins can then use this versatile platform to construct queries for searching the relevant data and creating data visualizations.

The Virtual Machine Manager Analytics solution comes with some built-in reports with preconfigured data visualizations so you can easily get started with frequently used queries, such as:

Distribution of failed jobs across VMM instances to easily scope down the broken instances.

Distribution of failures over time to find sudden spikes, and to help with correlating the cause and failures.

Distribution of failed jobs and errors to help with identifying the most error-prone jobs and the cause.

Distribution of the job runtime across different runs to identify the sluggish and error-prone jobs.
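Beyond the built-in reports, you can write your own searches in the legacy OMS query language. As a rough sketch of the first report above (failed jobs per VMM instance), a query would have this shape; note that the record type and field names here (`VMMjobs_CL`, `Status_s`, `VMMServer_s`) are illustrative placeholders, so check the actual custom log type the solution creates in your workspace:

```
Type=VMMjobs_CL Status_s=Failed | measure count() by VMMServer_s
```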

Additionally, the VMM jobs data in OMS Log Analytics can be correlated with the data from other OMS solutions for better debugging and automatic resolution with Azure Automation runbooks.

With the release of the update 1801 for System Center VMM 2016, configuration of guest clusters in SDN through VMM has undergone some changes.

With the network controller in place, VMs that are connected to a virtual network using SDN are only allowed to use the IP address that the network controller assigns for communication. Inspired by Azure networking design, VMM enables this feature by emulating floating IP functionality through the Software Load Balancer (SLB) in the SDN.

IMPORTANT: Network Controller does not support floating IP addresses which are essential for technologies such as Microsoft Failover Clustering to work.

VMM supports guest clustering in SDN through an Internal Load Balancer (ILB) Virtual IP (VIP). Guest clustering is managed through the SDN Network Controller (NC). Before you start, ensure you have set up SDN and deployed the NC and SLB.

The ILB uses probe ports, which are created on the guest cluster VMs, to identify the active node. At any given time, the probe port of only the active node responds to the ILB, and all traffic directed to the VIP is routed to the active node.

6 most common Hyper-V configuration mistakes

Microsoft MVPs Dave and Cristal Kawula developed an eBook where you'll find useful information about what not to do when installing and configuring Hyper-V.

This eBook focuses on the 6 most important Hyper-V configuration mistakes made today and how to avoid them. You’ll learn about:

Key features of the new Microsoft Azure Site Recovery Deployment Planner

Azure Site Recovery Deployment Planner is now GA with support for both Hyper-V and VMware.

The cost of Disaster Recovery to Azure is now included in the report. It gives the compute, storage, network and Azure Site Recovery licence cost per VM.

ASR Deployment Planner does a deep, ASR-specific assessment of your on-premises environment. It provides recommendations that are required by Azure Site Recovery for successful DR operations such as replication, failover, and DR-Drill of your VMware or Hyper-V virtual machines.
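The planner is a command-line tool that runs in two phases: profile first, then generate the report. A rough sketch of a VMware profiling run is below; the directory, server and file names are placeholders, so check the tool's own documentation for the full set of switches:

```powershell
# Start profiling the VMs listed in VMList.txt against a vCenter server
# (paths and server names below are placeholders).
.\ASRDeploymentPlanner.exe -Operation StartProfiling -Virtualization VMware `
    -Directory "E:\ProfiledData" -Server vCenter1.contoso.com `
    -VMListFile "E:\ProfiledData\VMList.txt"

# After profiling completes, generate the report (including the cost estimates):
.\ASRDeploymentPlanner.exe -Operation GenerateReport -Virtualization VMware `
    -Directory "E:\ProfiledData" -Server vCenter1.contoso.com `
    -VMListFile "E:\ProfiledData\VMList.txt"
```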

Some cool new features were released, like Windows 10 client management: you can now add Windows 10 client machines as connections in Honolulu, and manage them with a subset of tools in the "Computer Management" solution.

a. Set up the BIOS on the machine to support virtualization: configure the BIOS boot order to boot from a PXE-enabled network adapter as the first device.
b. Configure the BMC settings. Configure the logon credentials and IP address settings for the BMC on each computer.

Add resources to the VMM library: add a generalized virtual hard disk with a suitable OS to use as the base image, and driver files that will be added during installation of the OS.

Create a Run As account. In VMM create a Run As Account with permissions to access the BMC.

Create Physical Computer profiles: In the VMM library, create one or more physical computer profiles. These profiles include configuration settings, such as the location of the operating system image, and hardware and OS settings.

Now let's have a look at the step-by-step process to provision a Hyper-V host using bare-metal deployment:

In Credentials and Protocol select the Run As account with permissions to access the BMC. In the Protocol list, click the out-of-band management protocol that your BMCs use. If you want to use Data Center Management Interface (DCMI), click Intelligent Platform Management Interface (IPMI). Although DCMI 1.0 is not listed, it is supported. Make sure the correct port is selected.

In Discovery Scope, enter the single IP address, the IP subnet, or the IP address range that includes the IP addresses of the BMCs.
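The same discovery can also be scripted. As a sketch, assuming the VMM console and its PowerShell module are installed, and using a placeholder Run As account name and BMC address:

```powershell
# Retrieve the Run As account that holds the BMC credentials
# ("BMCAdmin" and the IP address below are placeholder values).
$RunAsAccount = Get-SCRunAsAccount -Name "BMCAdmin"

# Discover the physical computer through its BMC over IPMI.
Find-SCComputer -BMCAddress "10.0.0.21" -BMCRunAsAccount $RunAsAccount -BMCProtocol "IPMI"
```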

Note:

If you specify a single IP address, when you click Next, the computer is restarted.

If you specify an IP address range, when you click Next, information about the computer is displayed, and you can confirm that you specified the computer that you meant to.

If you specified an IP subnet or IP address range, the Target Resources page appears. Select the BMCs you want to provision as Hyper-V hosts.

In Provisioning Options, click a host group for new Hyper-V hosts. Select the physical computer profile you want to apply.

In Deployment Customization, provide information for each computer that you want to provision as a Hyper-V host:

Click on the Network Adapter (on the left) to modify the configuration, or fill in more information. You can specify the MAC address of the management NIC (not the BMC) and static IP settings for this network adapter.

To specify an IP address, select a logical network and an IP subnet if applicable. If the selected IP subnet includes an IP address pool, you can check Obtain an IP address corresponding to the selected subnet. Otherwise, type an IP address that's within the logical network or its subnet.

Configure the adapter settings for each network adapter. You must specify any information that is missing for the adapters.

When all information for the listed BMCs is complete, click Next.

In Summary, confirm the settings, and then click Finish to deploy the new Hyper-V hosts and bring them under VMM management.

Make sure that all steps in the job have a status of Completed.

To confirm that the host was added click Fabric > Servers > All Hosts > host group, and verify that the new Hyper-V host appears in the group.

Note: Nano Server is not a supported OS for infrastructure-related roles like Hyper-V. I recommend that you use the Windows Server 2016 Core installation instead.

Hyper-V Networking improvements: NAT, and what does it mean to you?
For many years I have been running Hyper-V on my laptop, which is especially useful considering I run many demos, and from time to time I speak at conferences that require you either to carry two or three computers or to run virtualisation on your laptop.

But to run some demos I needed networking in my Virtual Machines, particularly an internet connection, and in most cases that was not easy to accomplish. The trick I used to rely on: an Internal Virtual Switch assigned to all VMs, plus a second, External Virtual Switch assigned to a VM acting as a router, running the Windows Routing and Remote Access Service. As you would understand, this was undermining my demos by consuming vital resources (memory, CPU and so on) that I could otherwise assign to the VMs that were actually the demo VMs.

Another common way to get internet access in the Virtual Machines was to use Internet Connection Sharing (ICS) to connect through a shared connection.

Anyway, that is now in the past: since Microsoft released the Creators Update for Windows 10, you can create a Hyper-V Virtual Switch with NAT support, which enables VMs to be isolated behind a single IP address assigned to the host. This means that you no longer need to set up ICS or create a VM to act as a router. Also, as Sarah Cooley, Hyper-V PM, pointed out in her post, NAT networking is vital to both Docker and Visual Studio's UWP device emulators, and there are two significant improvements to NAT in the Windows 10 Creators Update:

You can now use multiple NAT networks (internal prefixes) on a single host.

You can build and test applications with industry-standard tooling directly from the container host using an overlay network driver (provided by the Virtual Filtering Platform (VFP) Hyper-V switch extension) as well as having direct access to the container using the Host IP and exposed port.

By the way, the process is done using PowerShell; there is no UI for it. In fact, when you create a NAT Virtual Switch, it will appear as an Internal Switch in the Hyper-V UI.

To create the NAT Virtual Switch:

Open the PowerShell console with Admin rights and create an Internal Virtual Switch. In the example below, I am naming the Virtual Switch "vNAT"; you can choose any name you want.

New-VMSwitch -SwitchName "vNAT" -SwitchType Internal

After creating the Virtual Switch, you need to configure the NAT gateway. This IP address must be from a new range, which will be defined in the next step. Notice the name of the interface alias, which is composed of the prefix "vEthernet " plus the name of the Virtual Switch created in the previous step, enclosed in brackets. I am assigning the IP address 10.1.3.1 as the NAT gateway IP and using 24 as the prefix length (255.255.255.0), which caters for 254 VMs.
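Using the values from the text (10.1.3.1, prefix length 24, switch named "vNAT"), the gateway assignment would look like this:

```powershell
# Assign the NAT gateway IP to the host vNIC that the Internal switch created.
New-NetIPAddress -IPAddress 10.1.3.1 -PrefixLength 24 -InterfaceAlias "vEthernet (vNAT)"
```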

You can check if the IP address for the NAT default gateway was assigned by typing:

Get-NetIPAddress -InterfaceAlias "vEthernet (vNAT)"

The next step is to define the NAT network name and its IP address range, on which the VMs with the assigned Virtual Switch will run. Make sure the IP address created in the previous step is within the range of this network.
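Staying with the 10.1.3.0/24 range from the previous step, this is done with New-NetNat (the network name "vNATNetwork" is my own choice):

```powershell
# Create the NAT network covering the internal prefix used by the VMs.
New-NetNat -Name "vNATNetwork" -InternalIPInterfaceAddressPrefix 10.1.3.0/24
```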

The next step is to connect the VMs' network adapters to the NAT Virtual Switch. You can do that using PowerShell or the UI.
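In PowerShell, connecting an existing VM's adapter would look roughly like this ("DemoVM" is a placeholder name):

```powershell
# Attach the VM's network adapter(s) to the NAT switch created earlier.
Get-VM -Name "DemoVM" | Get-VMNetworkAdapter | Connect-VMNetworkAdapter -SwitchName "vNAT"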

The final step is to assign an IP address to the VM’s. You will need to manually configure the network settings for the VM, as the built-in NAT switch doesn’t include a DHCP server for automatic IP address assignment. Assign the default gateway IP address of the private network to the internal VM switch Management Host vNIC.

Note: When the endpoint is attached to a container, the Host Network Service (HNS) allocates and uses the Host Compute Service (HCS) to assign the IP address, gateway IP, and DNS info to the container directly.

Note: If you require automatic IP address assignment to your VM’s, it can be easily accomplished by adding a DHCP server role to one of the VM’s. In my case, I added the DHCP role to the Domain Controller VM.

Important: To access the VM’s from the external network, you will need to create NAT rules, translating an external TCP/UDP port on the external interface to the NAT Virtual Switch port.
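As a sketch of such a rule, using the NAT network name and address range from the earlier examples (the port numbers and the internal IP 10.1.3.10 are placeholders):

```powershell
# Forward TCP port 50022 on the host to port 22 (SSH) on a VM inside the NAT network.
Add-NetNatStaticMapping -NatName "vNATNetwork" -Protocol TCP `
    -ExternalIPAddress 0.0.0.0 -ExternalPort 50022 `
    -InternalIPAddress 10.1.3.10 -InternalPort 22
```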

Extending Microsoft OMS to monitor Squid Proxy running in Linux with a plugin – part 1/3 #MSOMS
Since Microsoft released OMS, I have been an early adopter and evangelist for the solution. Not only is it simple to deploy, but it gives you a full spectrum of many of the workloads you have either on-premises or in the cloud, and it does not matter which cloud: Azure, AWS, Google and many others.

So, as I was advising a customer on OMS, I found that they were running Squid proxy servers. Squid is one of the most famous proxy servers in the world and has been used for years in many organisations. For that reason I decided to look at how OMS could handle monitoring for Squid.

But there was no Squid plugin, and that's where I brought back my past years of experience as a developer. Although that was a long, long time ago, I was able to develop a Squid plugin for Microsoft OMS in Ruby.

How did I develop it?

PART 1: Log files

I started by investigating the Squid log at /var/log/squid/access.log, and then researched REGEX expressions to extract information out of it. Below is an extract of it.

# enhanced parse log with date format, which passes the path of the log to the SquidLogParser and tags it as oms.api.Squid. By doing this, you will end up with 11 custom fields in OMS for the log type Squid_CL.
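The plugin itself is not reproduced here, but the core idea, a REGEX that splits a native-format access.log line into named fields, can be sketched in Ruby as follows. The field names are my own illustrative labels, not the exact 11 custom fields of the Squid_CL type:

```ruby
# Sketch of the parsing idea behind the plugin (not the plugin's actual code).
# Squid's native access.log line: time elapsed client action/code size
# method URL ident hierarchy/from content-type.
LOG_REGEX = /^(?<timestamp>\d+\.\d+)\s+(?<elapsed>\d+)\s+(?<client>\S+)\s+
             (?<action>\S+)\/(?<status>\d+)\s+(?<bytes>\d+)\s+(?<method>\S+)\s+
             (?<url>\S+)\s+(?<ident>\S+)\s+(?<hierarchy>\S+)\s+(?<content_type>\S+)$/x

line = '1508501212.501    125 192.168.1.5 TCP_MISS/200 1015 ' \
       'GET http://example.com/ - HIER_DIRECT/93.184.216.34 text/html'

m = LOG_REGEX.match(line)
record = m ? m.names.map { |n| [n, m[n]] }.to_h : {}

# Convert the epoch timestamp into a readable date, as a plugin would for OMS.
record['date'] = Time.at(record['timestamp'].to_f).utc.strftime('%Y-%m-%d %H:%M:%S') if m

puts record['client'] # => 192.168.1.5
puts record['status'] # => 200
```

Each named capture becomes a candidate custom field; in the real plugin these records are tagged oms.api.Squid and shipped to the OMS ingestion API.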