While the newest entry on the official Windows 8 developer blog doesn't have much to do with Windows 8 itself, it does offer an interesting article, written by several members of the Internet Explorer team, that examines how Microsoft tests web performance in its own Performance Lab.

The lab measures Internet Explorer 200 times a day, according to the blog, generating a total of over 5.7 million measurements per day. Overall, a whopping 480 GB of runtime data is produced in the lab daily. The lab also houses a private network of 140 machines that simulates the real internet, complete with web and DNS servers, routers and more.

The lab is also designed to handle almost any type of PC setup, including desktops, laptops, tablets and more, and can examine machines with x86, x64 and ARM-based processors. It can run tests using different web browsers, anti-virus programs and more in an attempt to capture representative real-world web performance statistics.

The Internet Explorer Performance Lab's machines fall into three categories. The first is Network and Server, which includes content servers that host web applications such as Outlook Web Access or Office Web Apps, as well as dedicated web servers with 16 cores and 16GB of RAM each. The lab also has network emulators that can simulate connecting to those web servers over cable, DSL, wireless 4G and even 56K dial-up connections. Finally, the lab's DNS servers link the web servers to the lab's next category of machines: the test clients.
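As a rough illustration of why those emulated connection types matter, here is a back-of-the-envelope sketch in Python. The bandwidth and latency figures are typical published values for each technology, not the lab's actual emulator settings:

```python
# Rough link profiles for the connection types the lab emulates.
# The bandwidth and latency figures below are typical published
# values for each technology, NOT the lab's actual emulator settings.
LINK_PROFILES = {
    "cable": {"down_kbps": 20_000, "latency_ms": 20},
    "dsl":   {"down_kbps": 6_000,  "latency_ms": 40},
    "4g":    {"down_kbps": 10_000, "latency_ms": 60},
    "56k":   {"down_kbps": 56,     "latency_ms": 150},
}

def estimated_load_ms(page_kb: float, profile: dict) -> float:
    """Naive estimate: one round trip plus serialized transfer time."""
    transfer_ms = page_kb * 8 / profile["down_kbps"] * 1000
    return profile["latency_ms"] + transfer_ms
```

Even this crude model shows why the same page can take well under a second over cable but minutes over dial-up, which is exactly the spread the emulators let the lab reproduce.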

There are over 160 different types of PCs in the test client category, ranging from high-end desktops to low-powered netbooks, as seen above. The lab also makes use of Microsoft's Windows Graphics Lab, which stocks nearly every graphics card and chip option, so it can swap out a client PC's graphics hardware to test pretty much any variation in graphical web performance.

Finally, the IE Performance Lab has a number of machines set up for analysis and reporting. There are 11 servers in this category, each with 16 cores and 16GB of RAM. The lab also has a SQL server that stores the roughly six million lab measurements generated each day on a machine with 24 logical cores and a whopping 64GB of RAM.

In terms of the tests themselves, the IE Performance Lab runs four different categories. One measures the loading of web content, while another examines performance with interactive web content. The browser itself is also tested, covering opening and closing the program, using features like bookmarks and history, and more. Finally, the lab uses benchmark software such as WebKit's SunSpider to look more closely at performance.

When examining how a web browser loads a website, the lab looks at a large number of variables, such as power consumption, CPU performance and how the browser handles a PC's resources. The blog states:

In total, the Performance Lab measures over 850 different metrics. Each one provides part of the picture of browser performance. To give a feel for what we measure, here’s a (non-exhaustive) list of key metrics: private working set, total working set, HTTP request count, TCP bytes received, number of binaries loaded, number of context switches, DWM video memory usage, percent GPU utilization, number of paints, CPU time in JavaScript garbage collection, CPU time in JavaScript parsing, average DWM update interval, peak total working set, number of heap allocations, size of heap allocations, number of outstanding heap allocations, size of outstanding heap allocations, CPU time in layout subsystem, CPU time in formatting subsystem, CPU time in rendering subsystem, CPU time in HTML parser subsystem, idle CPU time, number of threads.
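To give a sense of how per-iteration measurements like these might be collected, here is a hypothetical sketch. The record layout and metric names are illustrative only, not the lab's actual schema:

```python
import statistics
from dataclasses import dataclass, field

@dataclass
class IterationMetrics:
    # A handful of the 850+ metrics named in the post; purely illustrative.
    private_working_set_kb: int
    http_request_count: int
    js_gc_cpu_ms: float
    paint_count: int

@dataclass
class TestRun:
    """Accumulates one metric record per iteration of a test case."""
    iterations: list = field(default_factory=list)

    def add(self, m: IterationMetrics) -> None:
        self.iterations.append(m)

    def median_working_set(self) -> float:
        # The median is a natural summary when iterations are noisy.
        return statistics.median(
            m.private_working_set_kb for m in self.iterations)
```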

The testing process itself is also examined on the blog, including how a test client is set up to run on the lab's network. The article states that if the browser or Windows itself crashes during a test, the run is considered a failure and the lab moves on to the next test run. Each test case is run at least ten times.
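The run-and-retry policy described above can be sketched as follows; `execute_iteration` and the crash-as-exception model are assumptions for illustration, not the lab's harness:

```python
def run_test_case(execute_iteration, iterations=10):
    """Run one test case the way the article describes: at least ten
    iterations per case, and if the browser or Windows crashes, the
    whole run is marked failed and the lab moves on to the next run.

    execute_iteration is a hypothetical callable that performs one
    iteration and returns a timing sample (ms); a crash is modeled
    here as a raised RuntimeError.
    """
    samples = []
    for _ in range(iterations):
        try:
            samples.append(execute_iteration())
        except RuntimeError:
            # Browser/OS crash: abandon this run and report failure.
            return {"status": "failed", "samples": samples}
    return {"status": "passed", "samples": samples}
```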

When testing is complete, the lab examines the results, such as the chart above showing Elapsed Time results while running Bing Maps. The blog states:

The red series shows the median value of each test run, and grey bars show the range. Hovering over a test run will show the iterations for the metric (in blue) as well as a tooltip that provides the exact values for minimum, median, max values, as well as the absolute and relative difference with the previous test run. The tooltip shown in this image also provides additional context like the build being tested, and a quick link to our source control system to view the changes in the build.
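The min/median/max summary and run-over-run comparison described in that quote can be computed straightforwardly; this is a minimal sketch of the arithmetic, not the lab's reporting code:

```python
import statistics

def summarize_run(samples):
    """Collapse one test run's iterations into the min/median/max
    trio shown in the chart described above."""
    return {
        "min": min(samples),
        "median": statistics.median(samples),
        "max": max(samples),
    }

def compare_runs(prev, curr):
    """Absolute and relative difference of medians between two
    consecutive test runs, as the tooltip reports."""
    abs_diff = curr["median"] - prev["median"]
    rel_diff = abs_diff / prev["median"]
    return abs_diff, rel_diff
```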