Receive Segment Coalescing expands to Hyper-V virtual switches

Receive Segment Coalescing (RSC) is not a new idea. Microsoft introduced it in Windows Server 2012 to reduce the networking workload on the server CPU. Each network packet produces a processor interrupt, which requires the processor to stop its work and attend to the network traffic. In traditional physical systems, this network load was relatively slight, but with the advent of virtualization and the broad acceptance of VMs and containers — each of which demands network support — the burden on the network and processor can be far more pronounced.

Coalescing mitigates the network’s load on the processor by offloading packet handling to the network interface card and enabling the NIC to combine incoming traffic into fewer — but larger — packets. This reduces processor interrupts and eases the network load on the processor. With RSC, it takes less processor work to handle more network traffic.
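The coalescing idea can be illustrated with a toy sketch. This is not the Windows RSC implementation — the function name and sizes are invented for illustration — but it shows the core behavior: consecutive, in-order segments from the same flow are merged into one larger unit before the host is notified, so one interrupt covers many wire packets.

```python
def coalesce(segments, max_coalesced_size=65535):
    """Merge consecutive in-order (seq, payload) segments into fewer, larger ones."""
    coalesced = []
    for seq, payload in segments:
        if (coalesced
                and coalesced[-1][0] + len(coalesced[-1][1]) == seq
                and len(coalesced[-1][1]) + len(payload) <= max_coalesced_size):
            prev_seq, prev_payload = coalesced[-1]
            coalesced[-1] = (prev_seq, prev_payload + payload)  # extend the last segment
        else:
            coalesced.append((seq, payload))  # gap or size limit: start a new segment
    return coalesced

# Four 1,460-byte segments arriving back to back collapse into one 5,840-byte
# unit, so the host services one notification instead of four.
segments = [(i * 1460, b"x" * 1460) for i in range(4)]
merged = coalesce(segments)
print(len(segments), "->", len(merged))  # 4 -> 1
```

A gap in sequence numbers (out-of-order arrival) breaks the merge, which mirrors why coalescing only applies to contiguous traffic.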

The problem with older RSC technology was that it did not extend to Hyper-V virtual switches. Microsoft developed Windows Server 2019 SDN features to fix this limitation and enable virtual workloads to take advantage of RSC when connected to virtual switches, which benefits software-defined networks that rely on virtual components such as the virtual switch (vSwitch). RSC in the vSwitch is typically available without adding any specialized hardware or software components.

Dynamic Virtual Machine Multi-Queue spreads the CPU workload

The next Windows Server 2019 SDN technology is Dynamic Virtual Machine Multi-Queue (d.VMMQ), which improves the way processors distribute network load. As networks offer increased bandwidth, the processing required to support network traffic can exceed the capabilities of a single processor. Previous technologies, such as Virtual Machine Queue and Virtual Machine Multi-Queue, addressed this issue by enabling network traffic to be distributed among multiple processors in the system.

The problem is that these technologies also impose complexities in planning, monitoring and tuning to distribute network traffic properly. The d.VMMQ feature overcomes this challenge by automatically tuning the system on the fly to spread network overhead among processors in the most efficient way for each VM. For example, when network traffic is low, a single processor can handle all of the network traffic. As network traffic increases, d.VMMQ can expand to distribute network traffic among multiple processors. And as the network traffic falls off again, d.VMMQ can reduce the number of processors involved in handling the network load.
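The scaling behavior described above can be sketched in a few lines. This is a hypothetical model, not the d.VMMQ algorithm: the function name, per-queue capacity and queue cap are invented constants, and real tuning also weighs CPU utilization and coalescing. It simply shows a queue count that grows and shrinks with observed load instead of being fixed at planning time.

```python
def queues_needed(packets_per_sec, per_queue_capacity=100_000, max_queues=8):
    """Smallest queue (processor) count that keeps each queue under capacity."""
    needed = -(-packets_per_sec // per_queue_capacity)  # ceiling division
    return max(1, min(needed, max_queues))

# Light load uses one processor; heavy load fans out; the count shrinks again
# as traffic falls off.
for load in (20_000, 250_000, 900_000, 40_000):
    print(load, "pps ->", queues_needed(load), "queue(s)")
```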

Google open sourced GPipe, a scalable machine learning library designed to enable users to train large-scale deep neural networks faster, more accurately, and potentially with less compute power.

The tech vendor made the library available on GitHub March 4, open sourced under the Lingo framework, a TensorFlow-based deep learning framework designed specifically for linguistic sequence models.

The move is part of a trend in which big tech vendors, including Google, AWS and Facebook, are pushing out open source AI development tools, even as they move to protect and monetize others. To some, it's a tactic for further influencing the AI field; for others, it's simply a way of making AI more accessible.

In a related development on March 6, Google’s TensorFlow team said it had open sourced the TensorFlow Privacy tool.

Meanwhile, as GPipe enables users to create more accurate deep learning models, “making this available in open source will essentially allow anyone to harness the power of distributed machine learning to achieve higher accuracy in models,” Gualtieri said.

Teaching a neural network

Created by Google AI, the tech giant’s AI research and development branch, GPipe essentially partitions models across different GPU and TPU accelerators, but in such a way as to enable accelerators to operate in parallel.

GPipe splits training examples into “mini-batches” to determine model error, and then into even smaller “micro-batches,” according to a late 2018 Google AI research paper. Different accelerators can run different micro-batches at once, and gradients are “consistently accumulated across micro-batches,” Google said.
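The key property of this scheme — that accumulating gradients over micro-batches reproduces the full mini-batch update — can be checked with a minimal sketch. The model here is a single scalar weight with a squared-error loss, purely for illustration; in GPipe the micro-batches would run pipelined across accelerators rather than in a loop.

```python
def gradient(w, xs, ys):
    """Mean gradient of the loss (w*x - y)^2 over a batch."""
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

def accumulated_gradient(w, xs, ys, micro_batches=4):
    """Average of per-micro-batch gradients (equal-size micro-batches)."""
    size = len(xs) // micro_batches
    grads = [gradient(w, xs[i * size:(i + 1) * size], ys[i * size:(i + 1) * size])
             for i in range(micro_batches)]
    return sum(grads) / micro_batches

xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
ys = [2.0 * x for x in xs]  # target relationship y = 2x
w = 0.5
full = gradient(w, xs, ys)
accum = accumulated_gradient(w, xs, ys)
print(abs(full - accum) < 1e-9)  # True: the two updates match
```

Because the micro-batches are equal-sized, averaging their gradients equals the full-batch gradient, which is why pipelining does not change the training result.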

“There is also an implication for auto-ML because GPipe can be used to automate model building to make data scientists more productive and even make citizen data scientists capable of producing business-ready models,” Gualtieri said.

Open source trend

Over the last several years, Google’s AI researchers have made publicly available numerous projects, including data sets, code and software. One of the company’s most important public contributions was TensorFlow, a software library for AI and machine learning.

Originally developed for in-house use by Google Brain, the company’s deep learning team, TensorFlow was open sourced at the end of 2015. TensorFlow experience is now a necessity for those involved in machine learning. Google recently released a new version of the library that supports JavaScript, which has already proved to be popular with developers.

TensorFlow Privacy

Now available on GitHub, TensorFlow Privacy uses techniques based on the concept of differential privacy — that AI models can use, but not memorize, private information.

It's not a new concept — it is already being used in many AI-based products and services — but with TensorFlow Privacy, developers will be able to more easily protect the privacy of the people whose data is used to train their models, Google said.
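One common differentially private training recipe, which TensorFlow Privacy builds on, clips each example's gradient to a fixed norm and adds noise to the aggregate, so no single participant's data can dominate or be reconstructed from an update. The sketch below is illustrative only — the function names and constants are invented, and real DP-SGD uses calibrated Gaussian noise over vector gradients with a formal privacy accountant.

```python
import random

def clip(grad, clip_norm=1.0):
    """Scale a scalar gradient down so its magnitude is at most clip_norm."""
    norm = abs(grad)
    return grad * min(1.0, clip_norm / norm) if norm > 0 else grad

def private_average(per_example_grads, clip_norm=1.0, noise_std=0.1, rng=None):
    """Clip each example's gradient, add noise to the sum, then average."""
    rng = rng or random.Random(0)
    clipped = [clip(g, clip_norm) for g in per_example_grads]
    noisy_sum = sum(clipped) + rng.gauss(0.0, noise_std * clip_norm)
    return noisy_sum / len(per_example_grads)

grads = [0.3, -0.7, 5.0, 0.1]  # one outlier example
print(private_average(grads))  # the 5.0 contribution is capped at 1.0
```

The outlier example's influence is bounded by the clip norm, which is the mechanism that limits what a model can memorize about any one person's data.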

Google sells a number of AI products as well, including its various Cloud AI tools and services, such as Cloud Text-to-Speech and Cloud Speech-to-Text, both of which recently saw sizeable updates.


Today, we’re excited to announce that we are open sourcing Windows Calculator on GitHub under the MIT License. This includes the source code, build system, unit tests, and product roadmap. Our goal is to build an even better user experience in partnership with the community. We are encouraging your fresh perspectives and increased participation to help define the future of Calculator.

As developers, if you would like to know how different parts of the Calculator app work, easily integrate Calculator logic or UI into your own applications, or contribute directly to something that ships in Windows, now you can. Calculator will continue to go through all usual testing, compliance, security, quality processes, and Insider flighting, just as we do for our other applications. You can learn more about these details in our documentation on GitHub.

In addition to reusing and adapting the code in your own apps, anyone can participate in the development of Windows Calculator. Getting involved is simple. The project is “clone-and-go” and development will follow the standard GitHub flow. There are many ways for developers at all stages to contribute:

Participate in discussions
Report or fix issues
Suggest new feature ideas
Prototype new features
Design and build together with our engineers

Reviewing the Calculator code is a great way to learn about the latest Microsoft technologies like the Universal Windows Platform, XAML, and Azure Pipelines. Through this project, developers can learn from Microsoft’s full development lifecycle, as well as reuse the code to build their own experiences. It’s also a great example of Fluent app design. To make this even easier, we will be contributing custom controls and API extensions that we use in Calculator and other apps, to projects like the Windows Community Toolkit and the Windows UI Library.
We are happy to welcome all of you to the Windows Calculator team! To get started, check out the Windows Calculator project on GitHub.
Updated March 6, 2019 3:33 pm