It might be that networking has received more attention in Windows Server "10" than any other area. There are new and enhanced features in Software Defined Networking (SDN), a whole new Network Controller role, and changes to the DNS server, the DNS client, and DHCP.

As with several other enhancements in Windows Server vNext, the new Network Controller role has its roots in Azure. This "Swiss army knife" server role orchestrates and manages both physical network components (routers, switches, other hardware appliances) and virtual networks (the Hyper-V extensible switch, virtual network appliances).

It communicates "upward" to Virtual Machine Manager, Operations Manager, and Azure Pack and manages "downward" to the physical layer. It is highly scalable (just add more nodes for increased capacity) and continuously available in an active-active configuration. Although Network Controller is available as a role in the Technical Preview, there's no GUI yet. Another sign that this Technical Preview is very early is that the presentations at TechEd Europe all relied on conceptual videos rather than live demos.

Several competing virtual networking implementations and overlay protocols, such as VXLAN, STT, and NVGRE, exist on the market to let virtual networks run on top of physical networks. The Geneve initiative aims to eventually merge these protocols; in the meantime, Microsoft is clearly making the right move by supporting VXLAN (driven largely by VMware) alongside its own NVGRE.

The Network Controller brings a number of features to the table, such as a Software Load Balancer (SLB), a virtual firewall, a Hyper-V Network Virtualization Layer 2 and Layer 3 gateway, and site-to-site and VPN gateways. Collectively, these are known as Virtual Network Functions (VNFs), and Microsoft is also opening up the platform for third-party virtual appliances.

The Software Load Balancer is a policy-driven function of the Network Controller that intelligently routes incoming traffic to your (possibly multi-tier) services. The response traffic is, however, sent directly to clients, thereby reducing the risk of the SLB becoming a network bottleneck.
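The SLB itself isn't something you can poke at in the Preview, but the asymmetric traffic pattern described above (often called direct server return) rests on a simple idea: the load balancer steers each inbound flow to a backend chosen deterministically, and the backend replies to the client directly. A minimal sketch of the flow-hashing half, with entirely hypothetical addresses and pool names, might look like this:

```python
import hashlib

# Hypothetical backend pool for a multi-tier service (illustrative only).
BACKENDS = ["10.0.1.10", "10.0.1.11", "10.0.1.12"]

def pick_backend(src_ip, src_port, dst_ip, dst_port, proto="tcp"):
    """Pick a backend by hashing the flow 5-tuple, so every packet of
    a given flow lands on the same server (the core of a stateless
    load balancer)."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{proto}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return BACKENDS[digest % len(BACKENDS)]

# Only the inbound request traverses the balancer; with direct server
# return, the (usually much larger) response bypasses it entirely.
backend = pick_backend("203.0.113.7", 49152, "198.51.100.1", 443)
print(backend)
```

Because only requests pass through the balancer while bulky responses go straight back to clients, the SLB sees a fraction of the total traffic, which is why it is less likely to become a bottleneck.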

The Virtual Edge Gateway was introduced in Windows Server 2012 R2 as a way to link Hyper-V virtual networks to other networks; in this version, the gateway has been enhanced to support high-speed networking. It also supports more than the two nodes of 2012 R2 (which allowed only one active and one passive node) and can fail over from site to site. This site-to-site failover is made possible by a new BGP Route Reflector service that picks up dynamic changes in routing topology so that the various VNFs can be kept up to date.

Finally, the Distributed Data Center firewall protects virtual networks, both for traffic flowing in and out of the datacenter and for traffic between VMs. Policies can be set on the Network Controller and distributed through a plug-in for the extensible Hyper-V switch.

The Network Controller uses SNMP, MAC and ARP address discovery, route tables, and LLDP to discover network topologies and fault domains. The controller then validates what it has learned and keeps rechecking for changes so the information stays current.
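Microsoft hasn't published how the controller represents what it discovers, but the "keep checking for changes" step amounts to diffing the adjacency map from one discovery pass against the previous one. A sketch, with made-up device names, could look like this:

```python
def diff_topology(old: dict, new: dict) -> dict:
    """Compare two adjacency maps {device: set(neighbors)}, as built
    from LLDP/SNMP/ARP discovery passes, and report links that have
    appeared or vanished since the last scan."""
    devices = old.keys() | new.keys()
    added, removed = {}, {}
    for d in devices:
        gained = new.get(d, set()) - old.get(d, set())
        lost = old.get(d, set()) - new.get(d, set())
        if gained:
            added[d] = gained
        if lost:
            removed[d] = lost
    return {"added": added, "removed": removed}

# A top-of-rack switch loses one spine uplink between two scans.
before = {"tor1": {"spine1", "spine2"}, "tor2": {"spine1"}}
after = {"tor1": {"spine1"}, "tor2": {"spine1"}}
print(diff_topology(before, after))
```

Validating against a diff like this, rather than trusting a single scan, is what lets the controller reason about fault domains when a link disappears.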

Another feature straight out of Azure is the new Network Performance and Diagnostics Service (NPDS), which builds on the network discovery. Operations Manager already has fairly in-depth network monitoring, but it doesn't do active health monitoring. A new server feature called Canary Network Diagnostics, which you run inside VMs, works with the controller to actively assess the health of network connections by measuring packet loss and latency within each fault domain. Because the controller is aware of fault domains, it can perform impact analysis when a network link fails to determine what action to take. This monitoring is also smart enough to suppress alerts for short-lived issues (less than 0.1 seconds) so as not to overwhelm operators with unnecessary information.
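NPDS isn't exposed in the Preview, so the exact mechanics are unknown, but the suppression behavior described above is easy to picture: outage events shorter than the threshold simply never become alerts. A minimal sketch, with invented link names:

```python
SUPPRESS_WINDOW = 0.1  # seconds; issues shorter than this never alert

def filter_alerts(events):
    """Given (start, end, link) outage events from canary probes,
    keep only those that lasted long enough to merit an alert."""
    return [e for e in events if (e[1] - e[0]) >= SUPPRESS_WINDOW]

# A transient 50 ms blip is swallowed; a 2-second outage is reported.
events = [(10.00, 10.05, "rack1-uplink"), (42.0, 44.0, "rack2-uplink")]
print(filter_alerts(events))
```

A real implementation would also need debouncing for flapping links, but the principle is the same: operators see sustained problems, not every blip.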

The DNS client in both the Windows 10 client and server will now function more smoothly on systems with multiple NICs, because it will bind name resolution to the interface on which the relevant DNS server is configured. The exception is when the DNS server is configured through Group Policy via the Name Resolution Policy Table (NRPT), as is common with DirectAccess.

The major change to DHCP is that Network Access Protection (NAP) will no longer be enforced, simply because NAP itself is being deprecated in Server vNext.

The inclusion of Generic Routing Encapsulation (GRE) in the Windows Server networking stack opens up some interesting cloud scenarios. GRE is a lightweight tunneling protocol that can transport IPv4 and IPv6 (in Microsoft's implementation), is compatible with BGP for routing, and supports multitenancy. The protocol lets you easily connect a virtual tenant network at a hosted/public cloud to an on-premises physical network and can integrate with VLAN-based isolation. It is also cloud friendly, as it allows a host to create virtual networks and add subnets to externally facing networks without altering the configuration of its physical switches. Currently, the only way this functionality can be accessed is through new parameters for the Add-VpnS2SInterface cmdlet.
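What makes GRE "lightweight" and multi-tenant capable is visible in the wire format itself: an 8-byte header whose optional 32-bit key field (RFC 2890) can be used to tell one tenant's traffic from another's inside the same tunnel. This sketch builds such a header; it illustrates the protocol only, not Microsoft's implementation or the cmdlet's parameters:

```python
import struct

GRE_KEY_PRESENT = 0x2000  # K bit in the GRE flags/version field (RFC 2890)
PROTO_IPV4 = 0x0800       # EtherType of the encapsulated payload

def gre_encap(payload: bytes, key: int, proto: int = PROTO_IPV4) -> bytes:
    """Prepend a keyed GRE header: 16-bit flags, 16-bit protocol type,
    and the 32-bit key that can distinguish tenants on a shared tunnel."""
    header = struct.pack("!HHI", GRE_KEY_PRESENT, proto, key)
    return header + payload

# An (abbreviated, fake) inner IPv4 packet tagged with tenant key 5001.
frame = gre_encap(b"\x45\x00", key=5001)
print(frame.hex())
```

The delivery header (the outer IP packet carrying this) is what the physical network routes on, which is why hosts can add tenant subnets without touching their switches.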

IPAM, an interesting part of Windows Server first introduced in 2012, is gaining functionality to make it more cloud friendly and scalable in this new version. These enhancements are not in the Technical Preview, so we only have this description and this presentation from TechEd Europe to go on. The big change is in DNS management. IPAM was already pretty good at managing DHCP, but this version lets you manage multiple DNS servers (both file-based and AD-integrated) as well as assign role-based access control (RBAC) permissions down to the individual resource record if necessary. In this release, you can also see all the records associated with a single IP address, something even the ordinary DNS console doesn't let you do. Comprehensive auditing will also be offered to show changes to records over time, along with who made each change.

Two things are clear to me in researching these new features. First, this release of the Technical Preview is very early code, much earlier than the normal public betas, with several speakers at TechEd Europe resorting to slides only and pointing out that "we're working on this" and "this code isn't in there yet." Second, when it comes to networking, Microsoft is very serious about bringing the technology it has developed for Azure to servers, to a degree we've never seen before. This will pave the way for hybrid cloud functionality, as well as provide more flexibility in modern private cloud datacenters.
