With the latest Windows 10 Insider build, 16226, Microsoft introduced a new feature in Hyper-V to allow easy sharing of VMs amongst users. To share a VM, connect to its console in Hyper-V Manager and click the Share button as seen below:

You will then be prompted to select a location to save the compressed VM export/import file with the .vmcz extension (VM Compressed Zip perhaps?). Depending on the VM size, that might take a little while. If you want to check what’s in that export file, you can simply append .zip to its file name and open it either with Explorer or your favorite archive handling application. As you can see below, the structure is fairly familiar to anyone using Hyper-V:

You can find the VM hard disk drives (.vhd or .vhdx), its configuration file (.vmcx) and the run state file (.vmrs). So, there’s really no magic there! It creates a nice clean package of all the VM’s artifacts so you can easily send it around.
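
If you’d rather inspect the export from PowerShell, here’s a minimal sketch (the paths below are placeholders):

    # Copy the export and append .zip so that archive tools will open it
    Copy-Item -Path 'C:\Exports\MyVM.vmcz' -Destination 'C:\Exports\MyVM.zip'

    # Extract and list the VM artifacts (.vhdx, .vmcx, .vmrs)
    Expand-Archive -Path 'C:\Exports\MyVM.zip' -DestinationPath 'C:\Exports\MyVM'
    Get-ChildItem -Path 'C:\Exports\MyVM' -Recurse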

One thing I would like to see is the ability to trigger this process in other ways in Hyper-V Manager, as it’s oddly missing from the VM action pane and the VM’s right-click contextual menu. Maybe that’ll come in future builds. I also couldn’t find a way to trigger this in PowerShell yet.

Once your friend has the vmcz file in hand, they can simply double click on it to trigger the import. In the background, the utility C:\Program Files\Hyper-V\vmimport.exe is called. Unfortunately on my test laptop, the import process bombs out as seen below:

I suspect one should only have to type a name for the VM to be imported and click Import Virtual Machine. Those kinds of issues are to be expected when you’re in the Fast ring for Insider builds! I’m sure this will turn out to be a useful feature for casual Hyper-V users.

A little while ago, I had to take a deep dive into hardware statistics in order to troubleshoot a performance bottleneck. To achieve this, I ended up using Intel Performance Counter Monitor. As one cannot simply download pre-compiled binaries of those tools, I had to dust off my mad C++ compiler skills. You can find the binaries I compiled here as part of the latest GEM Automation release to save you some trouble. You’re welcome! 🙂

In order to use those tools, simply extract the GEM Automation archive to a local path on the machine you want to monitor, then change the current working directory to:

Here’s an overview of each of the executables in the directory and a sample output of each. Do note that you can export data to a CSV file for easier analysis. It seems to also include more metrics when you output the data that way.
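
As a quick example of the CSV export, a sketch of launching pcm.exe might look like this (the path is a placeholder, and the -csv switch syntax should be verified against your PCM build’s help output):

    # Change to the directory containing the compiled PCM binaries
    Set-Location 'C:\GEMAutomation\IntelPCM'

    # Sample the counters once per second and write them to a CSV file
    # (-csv is the output switch documented by PCM; verify with .\pcm.exe --help)
    .\pcm.exe 1 -csv=pcm_output.csv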

I recently spent some time experimenting with GPU Discrete Device Assignment in Azure using the NV* series of VMs. As we noticed that Internet Explorer was consuming quite a bit of CPU resources on our Remote Desktop Services session hosts, I wondered how much of an impact accelerating graphics through the specialized hardware of a GPU would have on CPU utilization. We experimented with Windows Server 2012 R2 and Windows Server 2016. While Windows Server 2012 R2 does deliver some level of hardware acceleration for graphics, Windows Server 2016 provided a more complete experience through better support for GPUs in an RDP session.

In order to enable hardware acceleration for RDP, you must do the following in your Azure NV* series VM:

Download and install the latest driver recommended by Microsoft/NVidia from here
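
The other step usually required is enabling the “Use the hardware default graphics adapter for all Remote Desktop Services sessions” Group Policy. Here’s a sketch of the registry equivalent; the value name is my assumption, so verify it against the policy’s ADMX documentation:

    # Group Policy: Computer Configuration > Administrative Templates >
    # Windows Components > Remote Desktop Services > Remote Desktop Session Host >
    # Remote Session Environment
    $key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services'
    New-Item -Path $key -Force | Out-Null

    # bEnumerateHWBeforeSW = 1 makes RDP prefer the hardware GPU over the software renderer
    New-ItemProperty -Path $key -Name 'bEnumerateHWBeforeSW' -Value 1 -PropertyType DWord -Force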

This scenario worked fine in both Windows Server 2012 R2 and Windows Server 2016.

Here’s what it looks like when you run this demo (don’t mind the GPU information displayed, that was from my workstation, not from the Azure NV* VM):

The Microsoft Fish Tank page (FishGL) leverages WebGL in the browser, which is in turn accelerated by the GPU when possible.

This proved to be the scenario that differentiated Windows Server 2016 from Windows Server 2012 R2. Only under Windows Server 2016 could a high frame rate and low CPU utilization be achieved. When this demo runs using only the software renderer, I observed CPU utilization close to 100% on a fairly beefy NV6 VM that has 6 cores, and that was just from running a single instance of the test.

In order to do a capture with Windows Performance Recorder, make sure that GPU activity is selected under the profiles to be recorded:
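
If you prefer the command line over the WPR UI, something like the following should capture the same data. GPU is one of the built-in profiles listed by wpr -profiles (verify it’s available on your build):

    # Start a trace using the built-in GPU profile
    wpr -start GPU

    # ...reproduce the workload, e.g. run FishGL in the browser...

    # Stop the trace and save it for analysis in Windows Performance Analyzer
    wpr -stop C:\Traces\gpu_trace.etl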

Here’s a recorded trace of the GPU utilization from the Azure VM while running FishGL in Internet Explorer that’s being visualized in Windows Performance Analyzer:

As you can see in the WPA screenshot above, quite a few processes can take advantage of the GPU acceleration.

Here’s what it looks like in Process Explorer when you’re doing live monitoring; you can see which process is consuming GPU resources. In this particular screenshot, you can see what Internet Explorer consumes while running FishGL on my workstation.

*Disclaimer* This is only an idea I’ve been toying with. It doesn’t represent in any way, shape or form future Microsoft plans in regards to memory/storage management. This page will evolve over time as the idea is being refined and fleshed out.

**Last Updated 2017-03-23**

The general idea behind Distributed Universal Memory is to have a common memory management API that would achieve the following:

For instance, if data is rarely used by the application, several data reduction techniques could be applied such as deduplication, compression and/or erasure coding

If data access time doesn’t require redundancy/locality/tolerates time for RDMA, it could be spread evenly across the Distributed Universal Memory Fabric

High Level Cluster View

Components

Here’s a high-level diagram of what it might look like:

Let’s go over some of the main components.

Data Access Manager

The Data Access Manager is the primary interface layer to access data. The legacy API would sit on top of this layer in order to properly abstract the underlying subsystems in play.

Transport Manager

This subsystem is responsible for pushing/pulling data to/from remote hosts. All inter-node data transfers would occur over RDMA to minimize the overhead of copying data back and forth between nodes.

Addressing Manager

This would be responsible for assigning a universal memory address to the data that’s independent of storage medium and cluster node.

Data Availability Manager

This component would be responsible for ensuring the proper level of data availability and resiliency is enforced as per the policies defined in the system. It would be made up of the following subsystems:

Availability Service Level Manager

The Availability Service Level Manager’s responsibility is to ensure the overall availability of data. For instance, it would act as the orchestrator that triggers the Replication Manager to make sure the data meets its availability objective.

Replication Manager

The Replication Manager is responsible for enforcing the right level of data redundancy across local and remote memory/storage devices. For instance, if 3 copies of the data must be maintained for a particular process/service/file/etc. across 3 different failure domains, the Replication Manager ensures this is the case as per the policy defined for the application/data.
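
To make the intent concrete, here’s a purely hypothetical sketch of what defining such a policy could look like. The cmdlet and parameter names are invented for illustration; nothing like this exists today:

    # Hypothetical illustration only: require 3 synchronous copies across
    # 3 failure domains for the data of a given process
    $policy = @{
        Copies          = 3
        FailureDomains  = 3
        ReplicationMode = 'Synchronous'
        Scope           = 'Process'
        Target          = 'sqlservr'
    }

    # Set-DumDataPolicy is a made-up cmdlet name for this sketch
    Set-DumDataPolicy @policy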

Data History Manager

This subsystem ensures that the appropriate point-in-time copies of the data are maintained. Those copies could be maintained in the system itself by using the appropriate storage medium, or they could be handed off to a third-party process if necessary (i.e. a standard backup solution). The API would provide a standard way to perform data recovery operations.

Data Capacity Manager

The Data Capacity Manager is responsible for ensuring that enough capacity of the appropriate memory/storage type is available for applications, and for applying the right capacity optimization techniques to make the most of the physical storage capacity available. The following methods could be used:

Compression

Deduplication

Erasure Coding

Data Performance Manager

The Data Performance Manager is responsible for ensuring that each application can access each piece of data at the appropriate performance level. This is accomplished using the following subsystems:

Latency Manager

This is responsible for placing the data on the right medium to ensure that each data element can be accessed at the right latency level. This can be determined either by pre-defined policy or by heuristics/machine learning that detect data access patterns beyond LRU/MRU methods.

The Latency Manager could also monitor if a local process tends to access data that’s mostly remote. If that’s the case, instead of generally incurring the network access penalty, the process could simply be moved to the remote host for better performance through data locality.

Service Level Manager

The Service Level Manager is responsible for managing the various applications’ expectations in regards to performance.

The Service Level Manager could optimize data persistence in order to meet its objective. For example, if the local non-volatile storage response time is unacceptable, it could choose to persist the data remotely and then trigger the Replication Manager to bring a copy of the data back locally.

Data Variation Manager

A subsystem could be conceived to persist a transformed state of the data. For example, if there’s an aggregation on a dataset, it could be persisted and linked to the original data. If the original data changes, the dependent aggregation variations could either be invalidated or updated as needed.

Data Security Manager

Access Control Manager

This would create a hard security boundary between processes and ensure only authorized access is granted, independently of the storage mechanism/medium.

Encryption Manager

This would be responsible for the encryption of the data if required as per a defined security policy.

Auditing Manager

This would audit data access as per a specific security policy. The events could be forwarded to a centralized logging solution for further analysis and event correlation.

Data accesses could be logged in a highly optimized graph database to allow you to:

Build a map of what data is accessed by processes

Build a temporal map of how the processes access data

Malware Prevention Manager

Data access patterns can be detected in-line by this subsystem. For instance, it could notice that a process is trying to access credit card number data based on things like regular expressions. Third-party anti-virus solutions would also be able to extend the functionality at that layer.

Legacy Construct Emulator

The goal of the Legacy Construct Emulator is to provide legacy/existing applications with the same storage constructs they use today in order to ensure backward compatibility. Here are a few examples of constructs that would be emulated under the Distributed Universal Memory model:

Block Emulator

Emulates the simplest construct, which serves as the foundation for the higher-level construct of the disk emulator

Disk Emulator

Based on the block emulator, this simulates the communication interface of a disk device

File Emulator

For the file emulator, it could work in a couple of ways.

If the application only needs to have a file handle to perform IO and is fairly agnostic of the underlying file system, the application could simply get a file handle it can perform IO on.

Otherwise, it could get that through the file system that’s layered on top of a volume that makes use of the disk emulator.

Volatile Memory Emulator

The goal would be to provide the necessary construct for the OS/application to store state data that might typically be stored in RAM.

One of the key things to note here is that even though all those legacy constructs are provided, the Distributed Universal Memory model has the flexibility to persist the data as it sees fit. For instance, even though the application might think it’s persisting data to volatile memory, the data might be persisted to an NVMe device in practice. The same principle would apply to file data; a file block might actually be persisted to RAM (similar to a block cache) that’s then replicated to multiple nodes synchronously to ensure availability, all of this potentially without the application being aware of it.

Metrics Manager

The Metrics Manager’s role is to capture/log/forward all data points in the system. Here’s an idea of what could be collected:

Availability Metrics

Replication latency for synchronous replication

Asynchronous backlog size

Capacity Metrics

Capacity used/free

Deduplication and compression ratios

Capacity optimization strategy overhead

Performance Metrics

Latency

Throughput (IOPS, Bytes/second, etc.)

Bandwidth consumed

IO Type Ratio (Read/Write)

Latency penalty due to SLA per application/process

Reliability Metrics

Device error rate

Operation error rate

Security Metrics

Encryption overhead

High Level Memory Allocation Process

More details coming soon.

Potential Applications

Application high availability

You could decide to synchronously replicate a process’s memory to another host and simply start the application binary on the failover host in the event that the primary host fails

Bring server cached data closer to the client

One could maintain a distributed coherent cache between servers and client computers

Move processes closer to data

Instead of having a process try to access data across the network, why not move the process to where the data is?

User State Mobility

User State Migration

A user state could move freely between a laptop, a desktop and a server (VDI or session host) depending on what the user requires.

Remote Desktop Service Session Live Migration

As the user session state memory is essentially virtualized from the host executing the session, it can be freely moved from one host to another to allow zero impact RDS Session Host maintenance.

Decouple OS maintenance/upgrades from the application

For instance, when the OS needs to be patched, one could simply move the process memory and execution to another host. This would avoid penalties such as buffer cache rebuilds in SQL Server, which can trigger a high number of IOPS on a disk subsystem in order to repopulate the cache with popular data. For systems with a large amount of memory, this can be fairly problematic.

Have memory/storage that spans to the cloud transparently

Under this model it would be fairly straightforward to implement a cloud tier for cold data

Option to preserve application state on application upgrades/patches

One could swap the binaries to run the process while maintaining process state in memory

Provide object storage

One could layer object storage service on top of this to support Amazon S3/Azure Storage semantics. This could be implemented on top of the native API if desired.

Provide distributed cache

One could layer distributed cache mechanisms such as Redis using the native Distributed Universal Memory API to facilitate porting of applications to this new mechanism

Facilitate application scale out

For instance, one could envision a SQL Server instance being scaled out using this mechanism by spreading worker threads across multiple hosts that share a common coordinated address space.

With the recent announcement of the new AMD “Naples” processor, a few things have changed in regards to options for Storage Spaces Direct. Let’s have a look at what this new CPU is about.

A few key points:

Between 16 cores/32 threads and 32 cores/64 threads per socket, or up to 64 cores/128 threads in a 2-socket server

Intel Skylake is “only” expected to have 28 cores per socket (** Update 2017-03-19 ** There are now rumors of 32-core Skylake E5 v5 CPUs)

2TB of RAM per socket

8 channel DDR4

Bandwidth is expected to be in the 170GB/s range

Intel Skylake is expected to only have 6 channel memory

128 PCIe 3.0 lanes PER socket

In a 2-socket configuration, “only” 64 lanes per socket will be available, as the other 64 are used for socket-to-socket transport

In other words, for S2D this means a single socket can properly support 2 x 100GbE ports AND 24 NVMe drives without any sorcery like PCIe switches in between

That’s roughly 126GB/s of PCIe bandwidth, not too shabby
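
Here’s the quick math behind that claim, assuming x16 per 100GbE NIC, x4 per NVMe drive and roughly 0.985GB/s of usable bandwidth per PCIe 3.0 lane:

    # PCIe 3.0 lane budget for a single Naples socket (128 lanes)
    $nicLanes   = 2 * 16    # two 100GbE NICs at x16 each = 32 lanes
    $nvmeLanes  = 24 * 4    # 24 NVMe drives at x4 each = 96 lanes
    $totalLanes = $nicLanes + $nvmeLanes    # = 128, the full socket budget

    # Usable bandwidth after encoding overhead
    $bandwidthGBps = $totalLanes * 0.985    # ~126GB/s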

Here’s an example of what it looks like in the flesh:

With that kind of horsepower, you might be able to start thinking about having a few million IOPS per S2D node if Microsoft can manage to scale up to that level. Scale that out to the supported 16 nodes in a cluster and now we have a party going! Personally, I think going with a single-socket configuration with 32 cores would be a fine sizing/configuration for S2D. It would also give you a server failure domain that’s reasonable. Furthermore, from a licensing standpoint, a 64-core Datacenter Edition server is rather pricey to say the least… You might want to go with a variant with fewer cores if your workload allows it. The IO balance provided by this new AMD CPU is much better than what Intel provides at this point in time. That may change if Intel decides to go with PCIe 4.0, but it doesn’t look like we’ll see that any time soon.

If VDI/RDS SH is your thing, perhaps taking advantage of those extra PCIe lanes for GPUs will be a nice advantage. Top that with a crazy core/thread count and you would be able to drive some pretty demanding user workloads without overcommitting your CPU too much, while also having access to tons of memory.

I’ll definitely take a look at AMD systems when Naples comes out later this year. A little competition in the server CPU market is long overdue! Hopefully AMD will price this one right and reliability will be what we expect from a server. Since it’s a new CPU architecture, it might take a little while before software manufacturers support and optimize for this chip. With the right demand from customers, that might accelerate the process!

In an effort to consolidate our diagnostic data in Elasticsearch and Kibana, one thing that landed on my plate was figuring out a way to load the relevant SharePoint ULS log data into Elasticsearch so it could be searched and visualized in Kibana. The process at a high level is the following:

Get a list of log files to load for each server based on the last incremental load.

Read each log file and exclude lines based on a predefined list of events

Partition the events based on the timestamp of the log event

This process is fairly important if you have a large number of events to be imported in Elasticsearch.

Partitioning the ULS log data by day will allow you to simply drop the index in Elasticsearch for the data that is no longer relevant. No need to query the index to find documents matching a specific retention period.

For instance, if you want to clean up by month, you can use the following function from libElasticsearch.psm1 to drop the data from December 2016:
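
Under the hood, a cleanup like that boils down to deleting the daily indexes matching the month. As a sketch, the raw REST equivalent would be a single call (the URL and index pattern are examples to adapt to your environment):

    # Delete all daily ULS indexes for December 2016 in one call
    Invoke-RestMethod -Method Delete -Uri 'http://localhost:9200/sharepointulslog-2016.12.*'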

Create batches of events for each partition to facilitate the insertion in Elasticsearch

Send each batch of events to Elasticsearch

Once the load is finished, persist the date and time of the last file that was loaded as a checkpoint

In order to come up with a good list of events to be excluded, I iteratively loaded small batches of log files in Elasticsearch and visualized the number of events per event ID and category. Looking at each of the events with the most occurrences, I checked whether those events would be useful from a troubleshooting standpoint. SharePoint is quite chatty depending on the logging level that is set. There are a lot of events for things like processes starting, running and completing. After a few hours of reviewing the events in our quality assurance environment, I ended up with a list of 271 events in the exclusion list, and that’s still a work in progress as more data is coming in.

Now let’s get into the actual details of running this process. After you have downloaded and extracted the latest release of GEM Automation, the first thing that needs to be done is to populate a file named \SharePoint\SharePointServers.csv. This is simply a comma-delimited file that contains the list of servers for which you want to collect the logs, along with some additional information. Here’s what it looks like:
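
For illustration, the content might look something like this (the column names here are assumptions; check the file shipped with GEM Automation for the exact format):

    ServerName,ULSLogPath
    SPWFE01,\\SPWFE01\C$\Program Files\Common Files\microsoft shared\Web Server Extensions\15\LOGS
    SPWFE02,\\SPWFE02\C$\Program Files\Common Files\microsoft shared\Web Server Extensions\15\LOGS
    SPAPP01,\\SPAPP01\C$\Program Files\Common Files\microsoft shared\Web Server Extensions\15\LOGS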

The partition key and type are specified to make sure we spread all the event data across daily indexes to speed up querying in Elasticsearch. Kibana is able to issue queries against multiple indexes at a time, e.g. sharepointulslog-*.

We need to pass the definition of the index, as it will be used to create each of the indexes that will partition the data.

Add-ElasticsearchDocumentBatch will then call:

Partition-ElasticsearchDocument to split the ULS events into daily buckets

Create-ElasticsearchDocumentBatch to split the documents in the partition into batches that will be sent to Elasticsearch

If the index for the partition doesn’t exist, it will get created at the beginning of the processing of the partition

The process is then repeated for each log file that requires processing
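
For reference, the actual insertion maps to Elasticsearch’s _bulk endpoint. Here’s a simplified sketch of what sending one daily partition amounts to ($events, the index name format and the type name are assumptions for illustration):

    # Simplified sketch: push a batch of parsed ULS events to a daily index
    $indexName = 'sharepointulslog-' + $events[0].Timestamp.ToString('yyyy.MM.dd')

    $ndjson = foreach ($doc in $events) {
        # The _bulk API expects an action/metadata line before each document
        '{"index":{"_index":"' + $indexName + '","_type":"ulsevent"}}'
        $doc | ConvertTo-Json -Compress
    }

    # The body must be newline-delimited JSON ending with a trailing newline
    Invoke-RestMethod -Method Post -Uri 'http://localhost:9200/_bulk' `
        -Body (($ndjson -join "`n") + "`n") -ContentType 'application/x-ndjson'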

Before you can see the data in Kibana, you’ll need to configure a new index pattern:

Once that’s completed, you will be able to create custom searches over that index pattern. Those searches can then be used to build specific visualizations, which in turn will be used to build a dashboard like the following (I apologize for the cramped screenshot; I will create a better one soon):

You can now slice and dice millions of ULS log events in seconds using Kibana. For example, you could filter out your dashboard based on the Database event category to find when SQL Server connectivity issues are happening.

Another interesting aspect of sending that data to Elasticsearch is that it facilitates finding errors associated with a specific correlation ID. You can simply put correlationid:<correlation id> in the search bar in Kibana and the results will be returned in as little as a few milliseconds.

If you have any questions about this, let me know in the comments below!

A quick post to let you know that this week I did a couple of releases of GEM Automation that include a large number of changes, covering all the commits made to the code repository since May.

At a high level there were changes to the following sections:

Active Directory

Build Deployment

Elasticsearch

Infrastructure Testing

Kibana

Networking

Service Bus

SharePoint

SQL Server

Storage Spaces

Utilities

Windows

For a detailed list of changes, please consult the release notes for build 4.0.0.0 and 4.0.0.1.

You can download the release 4.0.0.1 here which includes all the latest changes.