One of our favorite holidays is creeping up on us: St. Patrick's Day! What's better than an open excuse to skip work and down some beers? How about getting a boatload of free PC hardware as well?!

That's right, PC Perspective and Hardware Canucks have teamed up with sponsors NVIDIA, EVGA, ASUS, Crucial and Phanteks to bring our readers and YouTube subscribers a mega-epic prize pack you are going to have to see to believe!

Here is the total list:

Grand Prize

ASUS ROG Swift G-Sync Monitor

EVGA GTX 980 ACX 2.0

ASUS Maximus VII Hero

ASUS Strix Claw Mouse

ASUS Strix Tactic Pro KB

ASUS Strix Pro Headset

Crucial MX200 1TB SSD

Phanteks Enthoo Luxe Case

Phanteks PH-TC14S

2 x Phanteks PH-F140MP

2 x Phanteks PH-F140SP

2nd Prize

EVGA GTX 960 ACX 2.0

ASUS Maximus VII Hero

ASUS Gladius Mouse

Crucial BX100 1TB SSD

Phanteks PH-TC12LS

2 x Phanteks PH-F140MP

3rd Prize

EVGA GTX 960 ACX 2.0

ASUS Gladius Mouse

We are hosting this contest on our YouTube channels, so here are the rules for entry:

Intel dealt a blow to AMD and ARM this week with the introduction of the Xeon Processor D Product Family of low-power server SoCs. The new Xeon D chips use Intel's latest 14nm process and top out at 45W. The chips are aimed at low-power, high-density servers for general web hosting, storage clusters, web caches, and networking hardware.

Currently, Intel has announced two Xeon D chips, the Xeon D-1540 and Xeon D-1520. Both chips comprise two dies inside a single package. The main die uses a 14nm process and holds the CPU cores, L3 cache, DDR3 and DDR4 memory controllers, networking controller, PCI-E 3.0, and USB 3.0, while a secondary die, built on a larger (but easier to implement) manufacturing process, hosts the higher-latency I/O that would traditionally sit on the southbridge, including SATA, PCI-E 2.0, and USB 2.0.

In all, it is a fairly typical SoC setup from Intel. The specifics are where things get interesting, however. At the top end, Xeon D offers eight Broadwell-based CPU cores (with Hyper-Threading for 16 total threads) clocked at 2.0 GHz base and 2.5 GHz max all-core Turbo (2.6 GHz on a single core). The cores are slightly more efficient than Haswell, especially in this low-power setup. The eight cores can tap into 12MB of L3 cache as well as up to 128GB of registered ECC memory (or 64GB unbuffered and/or SODIMMs) in DDR3 1600 MHz or DDR4 2133 MHz flavors. Xeon D also features 24 PCI-E 3.0 lanes (which can be split as finely as six PCI-E 3.0 x4 links, or run in an x16+x8 configuration, among others), eight PCI-E 2.0 lanes, two 10GbE connections, six SATA III 6.0 Gbps channels, four USB 3.0 ports, and four USB 2.0 ports.

All of this hardware is rolled into a part with a 45W TDP. Needless to say, this is a new level of efficiency for Xeons! Intel chose to compare the new chips to its Atom C2000 "Avoton" (Silvermont-based) SoCs, which were also aimed at low-power servers and related devices. According to the company, Xeon D offers up to 3.4 times the performance and 1.7 times the performance-per-watt of the top-end Atom C2750 processor. Keeping in mind that Xeon D uses approximately twice the power of the Atom C2000, it still looks good for Intel, since you are getting more than twice the performance from a more power-efficient part.
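As a quick sanity check, those quoted ratios are internally consistent. The snippet below simply replays Intel's own comparison figures from the paragraph above; nothing here is an independent measurement.

```python
# Replaying Intel's quoted comparison figures (not independent measurements):
perf_ratio = 3.4    # Xeon D-1540 vs. Atom C2750 performance, per Intel
power_ratio = 2.0   # "approximately twice the power" of the Atom C2000
print(perf_ratio / power_ratio)  # -> 1.7, matching the quoted perf-per-watt gain
```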

Further, while the TDPs are much higher, Intel has packed Xeon D with a slew of power management technology, including Integrated Voltage Regulation (IVR), an energy-efficient turbo mode that analyzes whether increased frequencies actually help get work done faster (and, if not, reduces turbo to allow the extra power to be used elsewhere on the chip or simply to reduce wasted energy), and optional "hardware power management" that allows the processor itself to determine the appropriate power and sleep states independently of the OS.

Ars Technica notes that Xeon D is strictly single socket and that Intel has reserved multi-socket servers for its higher-end and more expensive Xeons (Haswell-EP). Where does the "high density" I mentioned come from, then? Well, by cramming as many Xeon D SoCs as possible, each on a small motherboard with its own RAM and I/O, into rack-mounted cases, of course! It is hard to say just how many Xeon Ds will fit in a 1U, 2U, or even 4U rack-mounted system without seeing the associated motherboards and networking hardware, but Xeon D should fare better than Avoton here since we are looking at higher-bandwidth networking links and more PCI-E lanes. Still, AMD, with SeaMicro's Freedom Fabric and a head start on low-power x86 and ARM-based Opteron research, as well as other ARM-based companies like AppliedMicro (X-Gene), will have a slight density advantage (though the Intel chips will be faster per chip).

Which brings me to my final point: Xeon D truly looks like a shot across the bows of both ARM and AMD. It seems Intel is not content with its dominant position in the overall server market and is putting its weight into a move to take over the low-power server market as well, a niche that ARM and AMD in particular have been actively pursuing. Intel is not quite down to the power levels that AMD and the ARM-based companies are, but by bringing Xeon down to 45W (with Atom-based solutions moving upward in performance), the Intel juggernaut is closing in, and I'm interested to see how it all plays out.

Right now, ARM still has the TDP and customization advantage (customers can create custom chips and cores to suit their exact needs), and AMD will be able to leverage its GPU expertise by including processor graphics for a leg up on highly multi-threaded GPGPU workloads. On the other hand, Intel has the better manufacturing process and the larger engineering budget. Xeon D seems to be the first step towards going after a market that Intel has not really focused on in the past.

With Intel throwing its weight around, where will that leave the little guys I have been rooting for in this low-power, high-density server space?

Project Lead: Joris-Jan van ‘t Land

Thanks to Ian Comings, a guest writer from the PC Perspective Forums, who conducted the interview with Bohemia Interactive's Joris-Jan van ‘t Land. If you are interested in learning more about ArmA 3 and hanging out with some PC gamers to play it, check out the PC Perspective Gaming Forum!

I recently got the chance to send some questions to Bohemia Interactive, a computer game development company based in Prague, Czech Republic, and a member of IDEA Games. Bohemia Interactive was founded in 1999 by CEO Marek Španěl, and it is best known for PC gaming gems like Operation Flashpoint: Cold War Crisis, the ArmA series, Take On Helicopters, and DayZ. The questions were answered by ArmA 3's Project Lead, Joris-Jan van ‘t Land.

PC Perspective: How long have you been at Bohemia Interactive?

VAN ‘T LAND: All in all, about 14 years now.

PC Perspective: What inspired you to become a Project Lead at Bohemia Interactive?

VAN ‘T LAND: During high school, it was pretty clear to me that I wanted to work in game development, and just before graduation, a friend and I saw a first preview for Operation Flashpoint: Cold War Crisis in a magazine. It immediately looked amazing to us; we were drawn to the freedom and diversity it promised and the military theme. After helping run a fan website (Operation Flashpoint Network) for a while, I started to assist with part-time external design work on the game (scripting and scenario editing). From that point, I basically grew naturally into this role at Bohemia Interactive.

PC Perspective: What part of working at Bohemia Interactive do you find most satisfying? What do you find most challenging?

VAN ‘T LAND: The amount of freedom and autonomy is very satisfying. If you can demonstrate skills in some area, you're welcome to come up with random ideas and roll with them. Some of those ideas can result in official releases, such as Arma 3 Zeus. Another rewarding aspect is the near real-time connection to those people who are playing the game. Our daily Dev-Branch release means the work I do on Monday is live on Tuesday. Our own ambitions, on the other hand, can sometimes result in some challenges. We want to do a lot and incorporate every aspect of combat in Arma, but we're still a relatively small team. This can mean we bite off more than we can deliver at an acceptable level of quality.

PC Perspective: What are some of the problems that have plagued your team, and how have they been overcome?

VAN ‘T LAND: One key problem for us was that we had no real experience with developing a game in more than one physical location. For Arma 3, our team was split over two main offices, which caused quite a few headaches in terms of communication and data synchronization. We've since had more key team members travel between the offices more frequently and improved our various virtual communication methods. A lot of work has been done to try to ensure that both offices have the latest version of the game at any given time. That is not always easy when your bandwidth is limited and games are getting bigger and bigger.

I know, the nerve of some people. Jacob from EVGA emails me this week, complaining about how he has this graphics card and motherboard just sitting in his cubicle taking up space and "why won't I just give it away already!?"

Fine. I'll do it. For science.

So let's make this simple, shall we? EVGA wants to get rid of some kick-ass gaming hardware and you want to win it. Why muddle up a good thing?

The EVGA GeForce GTX 960 delivers incredible performance, power efficiency, and gaming technologies that only NVIDIA Maxwell technology can offer. This is the perfect upgrade, offering 60% faster performance and twice the power efficiency of previous-generation cards*. Plus, it features VXGI for realistic lighting, support for smooth, tear-free NVIDIA G-SYNC technology, and Dynamic Super Resolution for 4K-quality gaming on 1080P displays.

The new EVGA ACX 2.0+ cooler brings new features to the award winning EVGA ACX 2.0 cooling technology. A Memory MOSFET Cooling Plate (MMCP) reduces MOSFET temperatures up to 11°C, and optimized Straight Heat Pipes (SHP) reduce GPU temperature by an additional 5°C. ACX 2.0+ coolers also feature optimized Swept fan blades, double ball bearings and an extreme low power motor, delivering more air flow with less power, unlocking additional power for the GPU.

Welcome to a new class of high-performance motherboards with the EVGA Z97 lineup. These platforms offer a return to greatness with a new GUI BIOS interface and a reimagined power VRM that focuses on efficiency, and they are loaded with features such as Intel® Gigabit LAN, native SATA 6G/USB 3.0, and more.

Engineered for performance users, with excellent overclocking features. Includes a GUI BIOS focused on functionality, a new software interface for overclocking in the OS, high-quality components, an M.2 storage option, and more.

The Process (aka how do you win?)

So even though I'm doing all the work getting this hardware out of Jacob's busy hands and to our readers...you do have to do a couple of things to win the hardware as well.

The contest will run for one week, so you will have more than enough time to listen to or watch the podcast and get the super-secret answer. We'll ship anywhere in the world, and one person will win both fantastic prizes! Once the contest closes (Wednesday, February 25th at 12pm ET), we'll randomly draw a winner from the entries in the form below that have the correct answer!

A HUGE thanks goes to our friends at EVGA for supplying the hardware for our giveaway. Good luck!

Overview

We’ve been tracking NVIDIA’s G-Sync for quite a while now. The comments section on Ryan’s initial article erupted with questions, and many of those were answered in a follow-on interview with NVIDIA’s Tom Petersen. The idea was radical: do away with the traditional fixed refresh rate and send a new frame to the display only when the GPU has just finished rendering it. There are many benefits here, but the short version is that you get the low-latency benefit of V-SYNC OFF gaming combined with the image quality (lack of tearing) that you would see with V-SYNC ON. Despite the many benefits, there are some potential disadvantages that come from attempting to drive an LCD panel at varying intervals, as opposed to the fixed intervals that have been the norm for over a decade.

As the first round of samples came to us for review, the current leader appeared to be the ASUS ROG Swift. A G-Sync 144 Hz display at 1440P was sure to appeal to gamers who wanted faster response than the 4K 60 Hz G-Sync alternative was capable of. Due to what seemed to be large consumer demand, it has taken some time to get these panels into the hands of consumers. As our Storage Editor, I decided it was time to upgrade my home system, placed a pre-order, and waited with anticipation of finally being able to shift from my trusty Dell 3007WFP-HC to a large panel that can handle >2x the FPS.

Fast forward to last week. My pair of ROG Swifts arrived, and some other folks I knew had also received theirs. Before I could set mine up and get some quality gaming time in, my bro FifthDread and his wife both noted a very obvious flicker on their Swifts within the first few minutes of hooking them up. They reported the flicker during game loading screens and mid-game during background content loads in some RTS titles. Prior to hearing from them, the most I had seen were some conflicting and contradictory reports on various forums (not limited to the Swift, though that is the earliest panel and would therefore see the majority of early reports), but now we had something more solid to go on. That night I fired up my own Swift and immediately got to doing what I do best: trying to break things. We have reproduced the issue and intend to demonstrate it in a measurable way, mostly to put some actual data out there to go along with the reports from those trying to describe something that is borderline perceptible for mere fractions of a second.

First a bit of misnomer correction / foundation laying:

The ‘Screen refresh rate’ option you see in Windows Display Properties is actually a carryover from the CRT days. In terms of an LCD, it is the maximum rate at which a frame is output to the display. It is not representative of the frequency at which the LCD panel itself is refreshed by the display logic.

LCD panel pixels are periodically updated by a scan, typically from top to bottom. Newer / higher quality panels repeat this process at a rate higher than 60 Hz in order to reduce the ‘rolling shutter’ effect seen when panning scenes or windows across the screen.

In order to engineer faster-responding pixels, manufacturers must deal with the side effect of faster pixel decay between refreshes. This is balanced by increasing the frequency of scanning out to the panel.

The effect we are going to cover here has nothing to do with motion blur, LightBoost, backlight PWM, or LightBoost combined with G-Sync (not currently a thing; even though Blur Busters has theorized about how it could work, their method would not work with how G-Sync is actually implemented today).

With all of that out of the way, let’s tackle what folks out there may be seeing on their own variable refresh rate displays. Based on our testing so far, the flicker only presents itself when a game enters a 'stalled' state. These are periods where you would see a split-second freeze in the action, like during a background level load in the middle of game play in some titles. It also appears during some game level load screens, but as those are normally static scenes, the flicker would have gone unnoticed on fixed refresh rate panels. Since we were absolutely able to see that something was happening, we wanted to be able to catch it in the act and measure it, so we rooted around the lab and put together some gear to do so. It’s not a perfect solution by any means, but we only needed to observe differences between smooth gaming and the ‘stalled state’ where the flicker was readily observable. Once the solder dust settled, we fired up a game that we knew could instantaneously swing from a high FPS (144) to a stalled state (0 FPS) and back again. As it turns out, EVE Online does this exact thing while taking an in-game screen shot, so we used that for our initial testing. Here’s what the brightness of a small segment of the ROG Swift does during this very event:

Measured panel section brightness over time during a 'stall' event.

The relatively small ripples to the left and right of center demonstrate the panel output at just under 144 FPS. Panel redraw is in sync with the frames coming from the GPU at this rate. The center section, however, represents what takes place when the input from the GPU suddenly drops to zero. In the above case, the game briefly stalled, then resumed a few frames at 144, then stalled again for a much longer period of time. Completely stopping the panel refresh would result in all TN pixels bleeding towards white, so G-Sync has a built-in failsafe to prevent this by forcing a redraw every ~33 msec. What you are seeing are the pixels intermittently bleeding towards white and periodically being pulled back down to the appropriate brightness by a scan. The low-latency panel used in the ROG Swift does this all of the time, but it is less noticeable at 144 Hz, as you can see on the left and right edges of the graph. An additional thing that’s happening here is an apparent rise in average brightness during the event. We are still researching the cause of this on our end, but this brightness increase certainly helps draw attention to the flicker event, making it even more perceptible to those who might not otherwise have noticed it.

Some of you might be wondering why this same effect is not seen when a game drops to 30 FPS (or even lower) during the course of normal game play. While the original G-Sync upgrade kit implementation simply waited until 33 msec had passed before forcing an additional redraw, this introduced judder at 25-30 FPS. Based on our observations and testing, it appears that NVIDIA has corrected this in the retail G-Sync panels with an algorithm that intelligently re-scans at even multiples of the input frame rate in order to keep the redraw rate relatively high, and therefore keep flicker imperceptible, even at very low continuous frame rates.
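To make that inferred behavior concrete, here is a minimal sketch of how such a scheme could pick a redraw rate. The ~33 msec hold limit comes from our measurements above, but the actual firmware logic is NVIDIA's and has not been published, so treat this strictly as an illustration.

```python
# Minimal sketch of the low-frame-rate redraw behavior inferred above. The
# ~33 msec maximum hold time and the "re-scan at a multiple of the frame rate"
# strategy come from our observations; the real firmware logic is not public.

def panel_redraw_interval_ms(frame_interval_ms, max_hold_ms=33.0):
    """Return how often the panel is re-scanned for a given GPU frame interval."""
    if frame_interval_ms <= max_hold_ms:
        # Frames arrive fast enough that every redraw is driven by a new frame.
        return frame_interval_ms
    # Otherwise, insert extra re-scans at an integer multiple of the frame rate
    # so the pixels never decay for longer than the hold limit.
    multiple = 2
    while frame_interval_ms / multiple > max_hold_ms:
        multiple += 1
    return frame_interval_ms / multiple

print(panel_redraw_interval_ms(1000 / 144))  # ~6.9 ms: redraw simply tracks the GPU
print(panel_redraw_interval_ms(1000 / 25))   # 20 ms: two re-scans per frame at 25 FPS
```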

A few final points before we go:

This is not limited to the ROG Swift. All variable refresh panels we have tested (including 4K) see this effect to a greater or lesser degree than reported here. Again, this only occurs when games instantaneously drop to 0 FPS, and not when those games dip into low frame rates in a continuous fashion.

The effect is less perceptible (both visually and with recorded data) at lower maximum refresh rate settings.

The effect is not present at fixed refresh rates (G-Sync disabled or with non-G-Sync panels).

This post was primarily meant as a status update and to serve as something for G-Sync users to point to when attempting to explain the flicker they are perceiving. We will continue researching, collecting data, and coordinating with NVIDIA on this issue, and will report back once we have more to discuss.

During the research and drafting of this piece, we reached out to and worked with NVIDIA to discuss this issue. Here is their statement:

"All LCD pixel values relax after refreshing. As a result, the brightness value that is set during the LCD’s scanline update slowly relaxes until the next refresh.

This means all LCDs have some slight variation in brightness. In this case, lower frequency refreshes will appear slightly brighter than high frequency refreshes by 1 – 2%.

When games are running normally (i.e., not waiting at a load screen, nor a screen capture) - users will never see this slight variation in brightness value. In the rare cases where frame rates can plummet to very low levels, there is a very slight brightness variation (barely perceptible to the human eye), which disappears when normal operation resumes."

So there you have it. It's basically down to the physics of how an LCD panel works at varying refresh rates. While I agree that it is a rare occurrence, there are some games that present this scenario more frequently (and noticeably) than others. If you've noticed this effect in some games more than others, let us know in the comments section below.

(Editor's Note: We are continuing to work with NVIDIA on this issue and hope to find a way to alleviate the flickering with either a hardware or software change in the future.)

It has become increasingly apparent that flash memory die shrinks have hit a bit of a brick wall in recent years. The issues faced by the standard 2D planar NAND process were recognized very early on. This was no real secret; here's a slide shown at the 2009 Flash Memory Summit:

Despite this, most flash manufacturers pushed the envelope as far as they could within the limits of 2D process technology, balancing shrinks with reliability and performance. One of the largest flash manufacturers was Intel, having joined forces with Micron in a joint venture dubbed IMFT (Intel Micron Flash Technologies). Intel remained in lock-step with Micron all the way up to 20nm, but chose to hold back at the 16nm step, presumably in order to shift full focus towards alternative flash technologies. This was essentially confirmed late last week, with Intel's announcement of a shift to 3D NAND production.

Intel's press briefing seemed to focus more on cost efficiency than performance, and after reviewing the very few specs they released about this new flash, I believe we can do some theorizing as to its potential performance. From the above illustration, you can see that Intel has chosen to go with the same sort of 3D technology used by Samsung: a 32-layer vertical stack of flash cells. This requires the use of an older / larger process technology, as it is too difficult to etch these holes at a 2x nm size. What keeps the die size reasonable is the fact that you get a 32x increase in bit density. Going off a rough approximation from the above photo, imagine that 50nm die (8 Gbit), but with 32 vertical NAND layers. That would yield a 256 Gbit (32 GB) die within roughly the same footprint.

It's likely a safe bet that IMFT flash will be going for a cost/GB far cheaper than the competing Samsung VNAND, and going with a relatively large 256 Gbit (vs. VNAND's 86 Gbit) per-die capacity is a smart move there, but let's not forget that there is a catch: write speed. Most NAND is very fast on reads but limited on writes. Shifting from 2D to 3D NAND netted Samsung a 2x speed boost per die, and another effective 1.5x speed boost due to their choice to reduce per-die capacity from 128 Gbit to 86 Gbit. This effective speed boost came from the fact that a given VNAND SSD has 50% more dies to reach the same capacity as an SSD using 128 Gbit dies.
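For reference, here is the die-size and die-count arithmetic from the last two paragraphs in one place, using the same rough approximations as above.

```python
import math

# 32 vertical layers on an ~8 Gbit planar-equivalent footprint:
stacked_die_gbit = 8 * 32               # -> 256 Gbit
print(stacked_die_gbit / 8, "GB/die")   # -> 32.0 GB per die

# Samsung's 86 Gbit VNAND dies vs. 128 Gbit planar dies: a 128 GB drive
# needs ~50% more dies, which is where the extra ~1.5x aggregate write
# speed comes from.
drive_gbit = 128 * 8                           # 128 GB of flash = 1024 Gbit
dies_at_128gbit = math.ceil(drive_gbit / 128)  # -> 8 dies
dies_at_86gbit = math.ceil(drive_gbit / 86)    # -> 12 dies
print(dies_at_86gbit / dies_at_128gbit)        # -> 1.5
```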

Now let's examine how Intel's choice of a 256 Gbit die impacts performance:

Intel SSD 730 240GB = 16 x 128 Gbit 20nm dies

270 MB/sec writes and ~17 MB/sec/die

Crucial MX100 128GB = 8 x 128 Gbit 16nm dies

150 MB/sec writes and ~19 MB/sec/die

Samsung 850 Pro 128GB = 12 x 86 Gbit VNAND dies

470 MB/sec writes and ~40 MB/sec/die

If we do some extrapolation based on the assumption that IMFT's move to 3D will net the same ~2x write speed improvement seen by Samsung, combined with their die capacity choice of 256Gbit, we get this:

Future IMFT 128GB SSD = 4 x 256 Gbit 3D dies

40 MB/sec/die x 4 dies = 160 MB/sec

Even rounding up to 40 MB/sec/die, we can see that doubling the die capacity effectively negates the performance improvement. While an SSD equipped with this IMFT flash will very likely be a lower-cost product, it will (theoretically) see the same write speed limits seen in today's SSDs equipped with IMFT planar NAND. Now let's go one layer deeper on theoretical products and assume that Intel took the 18-channel NVMe controller from its P3700 Series and adapted it to a consumer PCIe SSD using this new 3D NAND. The larger die size limits the minimum capacity you can attain while still fully utilizing the 18-channel controller, so with one die per channel, you end up with this product:

Theoretical 18-channel IMFT PCIe 3D NAND SSD = 18 x 256 Gbit 3D dies

40 MB/sec/die x 18 dies = 720 MB/sec

18 x 32GB (die capacity) = 576GB total capacity
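To pull the per-die figures and the two hypothetical configurations above into one quick sketch (all of it extrapolation under the stated assumptions, not measured results):

```python
# Per-die write speeds derived from the shipping drives listed above.
drives = {
    "Intel SSD 730 240GB":   (270, 16),  # (write MB/s, die count)
    "Crucial MX100 128GB":   (150, 8),
    "Samsung 850 Pro 128GB": (470, 12),
}
for name, (write_mb_s, dies) in drives.items():
    print(f"{name}: ~{write_mb_s / dies:.0f} MB/s per die")

# Assumption carried over from the Samsung comparison: IMFT 3D NAND also
# lands at roughly 40 MB/s of write throughput per 256 Gbit (32 GB) die.
assumed_per_die = 40

print("128GB drive, 4 dies:", assumed_per_die * 4, "MB/s")                # -> 160 MB/s
print("18-channel drive:", assumed_per_die * 18, "MB/s,", 18 * 32, "GB")  # -> 720 MB/s, 576 GB
```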

Overprovisioning decisions aside, the above would be the lowest capacity product that could fully utilize the Intel PCIe controller. While the write performance is on the low side by PCIe SSD standards, the cost of such a product could easily be in the $0.50/GB range, or even less.

In summary, while we don't have any solid performance data, it appears that Intel's new 3D NAND is not likely to lead to a breakthrough in SSD speeds, but the choice of a more cost-effective per-die capacity is likely to give Intel significant margins and the wiggle room to offer SSDs at a far lower cost/GB than we've seen in recent years. This may be the step that was needed to push SSD costs into a range that can truly compete with HDD technology.

Last year around this time, I reviewed my first bottle of Wyoming Whiskey. Overall, I was quite pleased with how this particular spirit had come along. You can read my entire review here. It also includes a short interview with one of the co-founders of Wyoming Whiskey, David Defazio. The landscape has changed a little over the past year, and the distillery has recently released a second product in limited quantities to the Wyoming market. The Single Barrel Bourbon selections come from carefully selected barrels and are not blended with others. I had the chance to chat with David again recently and received some interesting information from him about the latest product and where the company is headed.

Picture courtesy of Wyoming Whiskey

I noticed that you have a new single barrel product on the shelves. How would you characterize it compared to the standard bottle you sell?

These very few barrels are selected from many and only make the cut if they meet very high standards. We have only bottled 4 so far. And, the State has sold out. All of our product has matured meaningfully since last year and these barrels have benefitted the most as evidenced by their balance and depth of character. The finish is wickedly smooth. I have not heard one negative remark about the Single Barrel Product.

Have you been able to slowly lengthen the time that the bourbon matures until it is bottled, or is it around the same age as what I sampled last year?

Yes, these barrels are five years old, as is the majority of our small batch product.

How has the transition from Steve to Elizabeth as master distiller been?

Elizabeth is no longer with us. She had intended to train under Steve for the year, but when his family drew him back to Kentucky in February, this plan disintegrated. So, our crew is making bourbon under the direction of Sam Mead, my partners' son, who is our production manager. He has already applied his engineering degree in ways that help increase quality and production. And he's just getting started.

What other new products may be showing up in the next year?

You may see a barrel-strength bourbon from us. There are a couple of honey barrels that we are setting aside for this purpose.

Wyoming Whiskey had originally hired Steve Nally of Maker’s Mark fame, somehow pulling him out of retirement. He was the master distiller for quite a few years, and he moved on from the company this past year. He is now heading up a group that is opening a new distillery in Kentucky, hoping to break into the bourbon market. They expect their first products to be aged around 7 years. As we all know, it is hard for a company to stay afloat if it is not selling product. In the meantime, it looks like this group will do what so many other “craft” distillers have been caught doing: selling bourbon produced by mega-factories and labeling it as their own.

Bourbon has had quite the renaissance in the past few years, with the popularity of the spirit soaring. People go crazy trying to find limited edition products like Pappy Van Winkle, and many estimate that overall bourbon production in the United States will not catch up to demand anytime soon. This of course leads to higher prices and tighter supply for the most popular brands.

It is good to see that Wyoming Whiskey is lengthening the age of the barrels that they are bottling, as it can only lead to smoother and more refined bourbon. From most of my tasting, it seems that 6 to 7 years is about optimal for most bourbon. There are other processes that can speed up these results, and I have tasted batches that are only 18 months old yet rival much older products. I look forward to hearing more about what Wyoming Whiskey is doing to improve its product.

UPDATE 2: You missed the fun for the second time? That's unfortunate, but you can relive it with the replay right here!

I'm sure that, like the staff at PC Perspective, many of our readers have been obsessively playing the Borderlands games since the first release in 2009. Borderlands 2 arrived in 2012 and once again took hold of the PC gaming mindset. This week marks the release of Borderlands: The Pre-Sequel, which, as the name suggests, takes place before the events of Borderlands 2. The Pre-Sequel has playable characters that were previously only known to gamers as NPCs, and that, coupled with the new low-gravity gameplay style, should entice nearly everyone who loves the first-person, loot-driven series to come back.

To celebrate the release, PC Perspective has partnered with NVIDIA to host a couple of live game streams that will feature some multi-player gaming fun as well as some prizes to give away to the community. I will be joined once again by NVIDIA's Andrew Coonrad and Kris Rey to tackle the campaign cooperatively while taking a couple of stops to give away some hardware.

Holy crap, that's a hell of a list!! How do you win? It's really simple: just tune in and watch the Borderlands: The Pre-Sequel Game Stream Powered by NVIDIA! We'll explain the methods to enter live on the air and anyone can enter from anywhere in the world - no issues at all!

So stop by Tuesday night for some fun, some gaming and the chance to win some hardware!

I was not planning to report on Apple's announcement but, well, this just struck me as odd.

So Apple has relaunched the Mac Mini with fourth-generation Intel Core processors, after two years of waiting. It is the same height as the Intel NUC, but it is also almost twice the length and twice the width (Apple's 20cm x 20cm versus the NUC's ~11cm x 11cm when the case is included). So, after waiting through the entire Haswell architecture launch cycle, right up until the imminent release of Broadwell, they are going with the soon-to-be-outdated architecture to update their two-year-old platform?

(Note: The editorial originally said "two-year-old architecture". I thought that Haswell launched about six months earlier than it did. The mistake was corrected.)

I wonder if, following the iTunes U2 deal, this device will come bundled with Limp Bizkit's "Nookie"...

The price has been reduced to $499, a $100 reduction that PC developers who want a Mac for testing cross-platform applications will especially appreciate. It also has Thunderbolt 2. These are welcome additions. I just have two related questions: why today, and why Haswell?

The new Mac Mini started shipping yesterday. 15-watt Broadwell-U is expected to launch at CES in January, with 28W parts anticipated a few months later, in the following quarter.