Apple’s 2019 iPad features the company’s A10 processor, which debuted in the iPhone 7 at the end of 2016. Although that makes this tablet’s hardware a few years old, you get the performance of a former flagship with a large high-def screen and 32GB of storage at a surprisingly affordable price. Right now you can get one from Amazon marked down from $329.00 to $249.99.

Dell built this gaming desktop with an Intel Core i5-9400 and GeForce GTX 1660 Ti graphics processor. Together, this hardware can run games with high settings at 1080p resolution. The system also has a unique front panel that looks cool and edgy, and with promo code 50OFF699 you can get it now marked down from $929.99 to just $779.99 from Dell.

Working on a 4K monitor has some major advantages, including being able to fit more on-screen at any given time. This display from Dell utilizes a 27-inch 4K panel that also supports 1.07 billion colors, making it well-suited for image editing. Right now you can get one from Dell marked down from $719.99 to $579.99. If you have an Amex card, you can save even more by using promo code STAND4SMALL at checkout to drop the price to $521.99.

If you want to make sure not to get gouged on a hardware purchase, you should always keep your eye on factory refurbished sales from the world’s biggest computer manufacturers. While most models are either returns or overstock, all get a complete once-over and are certified as 100 percent working and ready to roll, often with an extended warranty.

And the biggest selling point: they’re often hundreds or even thousands off their regular retail price.

Check out this selection of some of the best desktop, laptop and tablet sales on factory refurbished items going on right now, as well as some deals on accessories you want to pick up while you’re at it.

Apple

If you need an iPad (or a second one for your kids) and don’t require the latest and hottest version, then the Apple iPad 3 ($199.99; originally $399) or Apple iPad 4 ($219.99; originally $499) at half off or more might be right up your alley. Both are WiFi-enabled, sport a 9.7-inch Retina display and come with the full package of accessories, including a charger and case. The iPad 3 has 64GB of storage, while the iPad 4 has 32GB, so judge your storage needs accordingly and load up.

If you’d rather go for something a little more portable, the 3rd Gen iPad Mini ($239.99; originally $399) with a 7.9-inch display is certainly more compact, but without losing any punching power. With WiFi capabilities and a 1.3GHz A7 processor, you’ll be able to stream video, browse photos and do loads of multi-tasking with no lagging and up to 10 hours of battery life. This model also has 64GB storage, more than enough room to keep all those media files and docs on this handy little helper.

But if raw power is what you’re seeking, your best bet might be the iPad Pro ($348.99; originally $599), housing a lightning quick A9X processor with 2GB of RAM to speed along as fast as your fingers can fly. With a built-in 12 MP iSight rear camera and 5 MP front-facing camera, you’ll snap stunning images with crystal clarity, which you can save to its 32GB of storage. This deal also comes with the full accessories pack, including a charger and case.

Of course, sometimes you need more than a tablet, which is where half off on an Apple MacBook Pro ($999.97; originally $1,999) might best suit your needs. One of the world’s top laptops, the MacBook Pro with a 15.4-inch high-resolution LED-backlit Retina display is powered by a robust quad-core 2.2GHz Intel Core i7 processor that delivers speedy performance. With full WiFi support, a 720p FaceTime HD webcam, and a 99.5Wh Li-Poly battery for smooth operations for up to 9 hours, it’s a winner. And did we mention…it’s half off the regular price?

For those who can’t decide between the utility of a laptop and the mobility of a tablet, the Surface Book splits the difference with its detachable keyboard component. Right now, Microsoft is offering $400 off a Microsoft Surface Book 2 ($1,100.99; originally $1,499.99) or almost $1,400 off another Surface Book model ($1,109; originally $2,499.99), bringing each to virtually the same price. The main difference is the choice of Intel Core i5 or i7 processor, though each sports many of the same features, including 13.5-inch touchscreens, WiFi and Bluetooth connectivity, and 256GB of storage space.

HP

Of course, even in a world dominated by laptops and tablets, there’s still a home for a trusty desktop computer — and they’re available at some super-low prices. With an AMD A8 7600B dual-core processor and 8GB of RAM under the hood, the HP EliteDesk ($229.99; originally $599.99) delivers fast and lag-free performance. Running on an included Windows 7 Professional OS with a 256GB hard drive, you’ll get full security features, media centricity, networking, and virtual disk support to handle all your personal and business-related tasks.

And since a computer’s utility is pretty tough to experience without a monitor, HP is also offering $10 off the price of their HP 21kd 20.7-inch LED monitor ($89.99; originally $99.99) as well. Tiltable from minus-5 to 20 degrees for more comfort during long work or gaming sessions, this monitor features anti-glare technology, a 1920×1080 resolution, and 6,000,000:1 dynamic contrast ratio for vividly sharp images, no matter what you’re watching.

Accessories

You can safeguard your new hardware from harm with a One Power 6-outlet and 4-USB port tower surge protector ($29.99; originally $40.99). With 1,800-Joule protection and a clean power filter to absorb excess energy and remove harmful waves from electrical currents, this protector shields up to 10 devices from the effects of lightning strikes, transformer malfunctions, power outages and more.

Replitronics is a company that’s bringing the 80s roaring back, starting with the Replitronics Hotline 16000 Power Bank ($34.99; originally $39.99). Sure, it may look like a Madonna-era Walkman, but it’s actually a multi-functional, portable power bank with the ability to charge up to three devices at once via USB 3.0, USB-C, and 10W wireless charging for Qi-enabled devices. Plus — it looks like a Walkman!

If you lost hours in an arcade during the go-go 80s, you’ll appreciate the Replitronics USB Charge Machine ($49.99; originally $59.99). At just 8.5 inches high, this perfect 1/6th scale replica of an iconic arcade change machine also sports 6 USB ports capable of charging up all six of your devices at the same time.

For those times where an outlet is nowhere to be found, the RAVPower 24W 3-Port Solar Charger ($59.99; originally $89.99) has your back. Just unfold this portable solar panel, let it drink in the sun’s rays, then plug in your smartphones, tablets and other devices for premium solar charging power anywhere and anytime you need a boost.

The EVE Bluetooth Transmitter and Receiver ($59.99; originally $69.99) is ideal for connecting your Bluetooth earphones or headphones to any of your non-Bluetooth-enabled devices. Now you can plug into your Nintendo Switch, PlayStation, smartphone, car audio system, and all your other gadgets that aren’t Bluetooth-ready and feed audio right to your headphones for up to 8 hours of wireless play on a single charge.

When you need eyes in hard-to-reach places, the Sinji Flexible Borescope Camera for Android and iOS ($29.95; originally $38.79) is mighty handy. With a 2-meter cable, you can sneak the Sinji’s waterproof, all-seeing eye and six powerful LED lights in to investigate anything from a blocked drain to a defective machine part. With its included app, it’ll even take photos or record video footage and save it directly to your smartphone, tablet or PC.

Finally, the Pictar Video Chat Kit ($94.99; originally $129.99) gives you all the pieces to immediately improve the look of any videos you shoot. With a flexible, durable tripod to position your camera, a compact, powerful, high-quality LED light to bathe your subjects, and an attachable wide-angle smart lens to expand the range of your images, your video calls or conferences will look more like a network TV interview.

Note: Terms and conditions apply. See the relevant retail sites for more information. For more great deals, go to our partners at TechBargains.com.


NASA’s InSight lander touched down at Elysium Planitia on Mars in late 2018, and it subsequently made history by taking the first seismic readings on another planet. However, the mission’s burrowing probe became stuck near the surface after the Martian soil proved less receptive than expected. NASA reports its plan of pushing the probe into the surface appears to be working — the instrument is finally below the surface. Its ability to tunnel deeper is unknown, though.

InSight is a stationary lander rather than a rover like Curiosity or Perseverance, but it doesn’t need to go anywhere to do what it went to Mars to do. NASA carefully chose the landing site at Elysium Planitia to perform this important geophysics work. The team deployed InSight’s Seismic Experiment for Interior Structure (SEIS) instrument shortly after landing, allowing the spacecraft to relay data on marsquakes. SEIS just sits on the surface of Mars, so deploying it was straightforward after analyzing the area around the lander. The Heat Flow and Physical Properties Package (HP3) was another story. This “mole” was supposed to dig 16 feet (five meters) into the planet, but it only made it a few inches down before popping back out.

The engineers who designed HP3 had to make some educated guesses about how the Martian soil would behave as it’s so unlike what we have here on Earth. NASA has speculated that the material is so fine that it continuously falls back into the hole each time the probe tries to hammer itself down deeper. After several failed remedies, NASA decided in March to just shove the probe into the surface with the lander’s robotic arm. NASA now says this seems to be working.

Over the last several weeks, the arm has slowly applied pressure to help the probe dig deeper. The operation was more delicate than you might expect. The arm had to be carefully placed for each push so it could apply pressure to the instrument without damaging the tether that connects it to the lander. Now, the HP3 has finally disappeared below the surface.

This is a big step in the right direction, but NASA still doesn’t know if the probe will be able to drag itself deeper. If the same issues persist at this depth, there’s nothing on InSight that can reach down there to help it along. Even if the mole is stuck and can’t move deeper, the team has still learned a lot about Mars that could help future instruments reach greater depths.


While scanning the skies, humanity has identified thousands of exoplanets orbiting distant stars. However, very few of them are at all similar to Earth. Now, the Max Planck Institute for Solar System Research in Göttingen reports a newly discovered exoplanet could be a “mirror image” of our own.

Direct imaging of exoplanets is possible in only a handful of cases, so astronomers usually infer their presence via two methods: either small wobbles in a star’s motion caused by the gravity of orbiting planets, or dips in its brightness as seen from Earth, which indicate a planet has transited the star. Kepler used the latter method to identify more than 2,600 exoplanets, and that number will probably continue to rise. Teams like the one from the Max Planck Institute are still combing through the luminance data gathered by Kepler to uncover new exoplanets. That’s how they found the very Earth-like candidate exoplanet KOI-456.04.
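The transit technique boils down to spotting small, periodic dips in a star’s measured brightness. Here’s a toy sketch with synthetic data — the threshold and light curve are made up, and real Kepler pipelines involve detrending, phase-folding, and extensive statistical vetting:

```python
def find_transit_dips(flux, threshold=0.995):
    """Return indices where normalized flux drops below the threshold."""
    baseline = sum(flux) / len(flux)  # crude baseline; real pipelines detrend
    return [i for i, f in enumerate(flux) if f / baseline < threshold]

# Synthetic light curve: steady brightness with two transit-like dips.
light_curve = [1.0] * 20
light_curve[5] = light_curve[15] = 0.99  # a ~1% dip suggests a large planet

print(find_transit_dips(light_curve))  # -> [5, 15]
```

Evenly spaced dips of consistent depth are what let astronomers estimate a candidate’s orbital period and size — and a planet as small as KOI-456.04 produces a dip faint enough that it took a new brightness model to tease it out of the noise.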

If it exists, KOI-456.04 orbits a sun-like star called Kepler-160 about 3,000 light-years away from Earth. Previous analysis of Kepler-160 revealed two large exoplanets — these gas giants are much easier to spot in the background noise, so many of the worlds we’ve discovered are very unlike Earth. One of those planets, Kepler-160c, showed small perturbations in its orbit that could indicate another planet, so the Max Planck Institute set out to find it.

Using the original Kepler data, the team developed a new physical model of stellar brightness variation. This algorithm identified a probable exoplanet much smaller than the other known planets orbiting Kepler-160. That’s KOI-456.04, which doesn’t have an official Kepler designation for now because it still needs to be verified. The team estimates an 85 percent chance that KOI-456.04 is really there.

KOI-456.04 is an important find because it and its host star mirror our solar system. Kepler-160 is a yellow dwarf star like the sun, whereas many stars hosting known exoplanets are the much more common red dwarf type. At 1.9 times the size of Earth, KOI-456.04 is very likely a rocky planet like ours. It’s also in the habitable zone of its star, meaning it could have liquid water on the surface. There might even be aliens on KOI-456.04 looking back at us and wondering if intelligent life evolved on Earth. The jury’s still out on that one.


Talk to anyone about building a new PC, and the question of longevity is going to pop up sooner rather than later. Any time someone is dropping serious cash on a hardware upgrade, they’re going to have questions about how long it will last, especially if they’ve been burned before. The usual answer is to “future-proof” — buy extra capability now in the hope it pays off later. But how much additional value can you actually squeeze out of the market that way — and does it actually benefit the end user?

Before I dive in, let me establish a few ground rules. I’m drawing a line between buying a little more hardware than you need today because you know you’ll have a use for it in the future and attempting to choose components for specific capabilities that you hope will become useful in the future. Let me give an example:

If you buy a GPU suitable for 4K gaming because you intend to upgrade your 1080p monitor to 4K within the next three months, that’s not future-proofing. If you bought a Pascal GPU over a Maxwell card in 2016 (or an AMD card over an NV GPU) specifically because you expected DirectX 12 to be the Next Big Thing and were attempting to position yourself as ideally as possible, that’s future-proofing. In the first case, you made a decision based on the already-known performance of the GPU at various resolutions and your own self-determined buying plans. In the second, you bet that an API with largely unknown performance characteristics would deliver a decisive advantage without having much evidence as to whether or not this would be true.

Note: While this article makes frequent reference to Nvidia GPUs, this is not to imply Nvidia is responsible for the failure of future-proofing as a strategy. GPUs have advanced more rapidly than CPUs over the past decade, with a much higher number of introduced features for improving graphics fidelity or game performance. Nvidia has been responsible for more of these introductions, in absolute terms, than AMD has.

Let’s whack some sacred cows:

DirectX 12

In the beginning, there were hopes that Maxwell would eventually perform well with DX12, or that Pascal would prove to use it effectively, or that games would adopt it overwhelmingly and quickly. None of these has come to pass. Pascal runs fine with DX12, but gains in the API are few and far between. AMD still sometimes picks up more than NV does, but DX12 hasn’t won wide enough adoption to change the overall landscape. If you bought into AMD hardware in 2013 because you thought the one-two punch of Mantle and console wins were going to open up an unbeatable Team Red advantage (and this line of argument was commonly expressed), it didn’t happen. If you bought Pascal because you thought it would be the architecture to show off DX12 (as opposed to Maxwell), that didn’t happen either.

Now to be fair, Nvidia’s marketing didn’t push DX12 as a reason to buy the card. In fact, Nvidia ignored inquiries about their support for async compute to the maximum extent allowable by law. But that doesn’t change the fact that DX12’s lackluster adoption to-date and limited performance uplift scenarios (low-latency APIs improve weak CPU performance more than GPUs, in many cases) aren’t a great reason to have upgraded back in 2016.

DirectX 11

Remember when tessellation was the Next Big Thing that would transform gaming? Instead, it alternated between having a subtle impact on game visuals (with a mild performance hit) or as a way to make AMD GPUs look really bad by stuffing unnecessary tessellated detail into flat surfaces. If you bought an Nvidia GPU because you thought its enormous synthetic tessellation performance was going to yield actual performance improvements in shipping titles that hadn’t been skewed by insane triangle counts, you didn’t get what you paid for.

DirectX 10

If you snapped up a GTX 8xxx GPU because you thought it was going to deliver great DX10 performance, you ended up disappointed. The only reason we can’t say the same of AMD is because everyone who bought an HD 2000 series GPU ended up disappointed. When the first generation of DX10-capable GPUs often proved incapable of using the API in practice, consumers who’d tried to future-proof by buying into a generation of very fast DX9 cards that promised future compatibility instead found themselves with hardware that would never deliver acceptable frame rates in what had been a headline feature.

This list doesn’t just apply to APIs, though APIs are an easy example. If you bought into first-generation VR because you expected your hardware would carry you into a new era of amazing gaming, well, that hasn’t happened yet. By the time it does, if it does, you’ll have upgraded your VR sets and the graphics cards that power them at least once. If you grabbed a new Nvidia GPU because you thought PhysX was going to be the wave of the future for gaming experiences, sure, you got some use out of the feature — just not nearly the experience the hype train promised, way back when. I liked PhysX — still do — but it wound up being a mild improvement, not a major must-have.

This issue is not confined to GPUs. If you purchased an AMD APU because you thought HSA (Heterogeneous System Architecture) was going to introduce a new paradigm of CPU – GPU problem solving and combined processing, five years later, you’re still waiting. Capabilities like Intel’s TSX (Transaction Synchronization Extensions) were billed as eventually offering performance improvements in commercial software, though this was expected to take time to evolve. Five years later, however, it’s like the feature vanished into thin air. I can find just one recent mention of TSX being used in a consumer product. It turns out, TSX is incredibly useful for boosting the performance of the PS3 emulator RPCS3. Great! But not a reason to buy it for most people. Intel also added support for raster order views years ago, but if a game ever took advantage of them I’m not aware of it (game optimizations for Intel GPUs aren’t exactly a huge topic of discussion, generally speaking).

You might think this is an artifact of the general slowdown in new architectural improvements, but if anything the opposite is true. Back in the days when Nvidia was launching a new GPU architecture every 12 months, the chances of squeezing support for a just-demonstrated feature into a brand-new GPU were even slimmer. GPU performance often nearly doubled every year, which made buying a GPU in 2003 for a game that wouldn’t ship until 2004 a really stupid move. In fact, Nvidia ran into exactly this problem with Half-Life 2. When Gabe Newell stood on stage and demonstrated HL2 back in 2003, the GeForce FX crumpled like a beer can.

I’d wager this graph sold more ATI GPUs than most ad campaigns. The FX 5900 Ultra was NV’s top GPU. The Radeon 9600 was a midrange card.

Newell lied, told everyone the game would ship in the next few months, and people rushed out to buy ATI cards. Turns out the game didn’t actually ship for a year and by the time it did, Nvidia’s GeForce 6xxx family offered far more competitive performance. An entire new generation of ATI cards had also shipped, with support for PCI Express. In this case, everyone who tried to future-proof got screwed.

There’s one arguable exception to this trend that I’ll address directly: DirectX 12 and asynchronous compute. If you bought an AMD Hawaii GPU in 2012 – 2013, the advent of async compute and DX12 did deliver some performance uplift to these solutions. In this case, you could argue that the relative value of the older GPUs increased as a result.

But as refutations go, this is a weak one. First, the gains were limited to only those titles that implemented both DX12 and async compute. Second, they weren’t uniformly distributed across AMD’s entire GPU stack, and higher-end cards tended to pick up more performance than lower-end models. Third, part of the reason this happened is that AMD’s DX11 driver wasn’t multi-threaded. And fourth, the modest uptick in performance that some 28nm AMD GPUs enjoyed was neither enough to move the needle on those GPUs’ collective performance across the game industry nor sufficient to argue for their continued deployment overall relative to newer cards built on 14/16nm. (The question of how quickly a component ages, relative to the market, is related-but-distinct from whether you can future-proof a system in general.)

Now, is it a great thing that AMD’s 28nm GPU customers got some love from DirectX 12 and Vulkan? Absolutely. But we can acknowledge some welcome improvements in specific titles while simultaneously recognizing the fact that only a relative handful of games have shipped with DirectX 12 or Vulkan support in the past three years. These APIs could still become the dominant method of playing games, but it won’t happen within the high-end lifespan of a 2016 GPU.

Optimizing Purchases

If you want to maximize your extracted value per dollar, don’t focus on trying to predict how performance will evolve over the next 24-48 months. Instead, focus on available performance today, in shipping software. When it comes to features and capabilities, prioritize what you’re using today over what you hope to use tomorrow. Software roadmaps get delayed. Features are pushed out. Because we never know how much impact a feature will have or how much it’ll actually improve performance, base your buying decision solely on what you can test and evaluate at the moment. If you aren’t happy with the performance an upgrade delivers today, don’t buy the product until you are.

Second, understand how companies price and which features are the expensive ones. This obviously varies from company to company and market to market, but there’s no substitute for it. In the low-end and midrange GPU space, both AMD and Nvidia tend to increase pricing linearly alongside performance. A GPU that offers 10 percent more performance is typically 10 percent more expensive. At the high end, this changes, and a 10 percent performance improvement might cost 20 percent more money. As new generations appear and the next generation’s premium performance becomes the current generation’s midrange, the cost of that performance drops. The GTX 1060 and GTX 980 are an excellent example of how a midrange GPU can hit the performance target of the previous high-end card for significantly less money, less than two years later.
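The pricing rule of thumb above is easy to put into numbers. A tiny sketch with made-up benchmark scores and prices — none of these figures describe a real card — shows where performance-per-dollar flattens out:

```python
def perf_per_dollar(performance, price):
    """Value metric: benchmark score per dollar spent."""
    return performance / price

# Hypothetical tiers illustrating linear midrange pricing vs. the
# high-end premium described in the text.
midrange = perf_per_dollar(100, 250)   # baseline card
step_up  = perf_per_dollar(110, 275)   # +10% performance, +10% price
flagship = perf_per_dollar(121, 330)   # +10% more performance, +20% more price

print(round(midrange, 3))  # -> 0.4
print(round(step_up, 3))   # -> 0.4  (linear pricing: same value)
print(round(flagship, 3))  # -> 0.367 (you pay a premium for the top tier)
```

When the ratio stops holding, as in the flagship case, you’re paying for the last few percent of absolute performance rather than for value — which is fine if that’s what you want, as long as you know you’re doing it.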

Third, watch product cycles and time your purchasing accordingly. Sometimes, the newly inexpensive last generation product is the best deal in town. Sometimes, it’s worth stepping up to the newer hardware at the same or slightly higher price. Even the two-step upgrade process I explicitly declared wasn’t future-proofing can run into trouble if you don’t pay close attention to market trends. Anybody who paid $1,700 for a Core i7-6950X in February 2017 probably wasn’t thrilled when the Core i9-7900X dropped with higher performance and the same 10 cores a few months later for just $999, to say nothing of the hole Threadripper blew in Intel’s HEDT product family by offering 16 cores instead of 10 at the same price.

Finally, remember this fact: It is the literal job of a company’s marketing department to convince you that new features are both overwhelmingly awesome and incredibly important for you to own right now. In real life, these things are messier and they tend to take longer. Given the relatively slow pace of hardware replacement these days, it’s not unusual for it to take 3-5 years before new capabilities are widespread enough for developers to treat them as the default option. You can avoid that disappointment by buying the performance and features you need and can get today, not what you want and hope for tomorrow.

Update (6/4/2020): The best reason to revisit a piece like this is to check it for accuracy. After all, “future-proofing” is supposed to be a method of saving money. Nearly two years later, how does this piece hold up compared to the introductions we’ve seen since?

On the CPU front, all of the good gaming CPUs of 2018 are still solid options in 2020. This is more-or-less as expected, since CPUs age more slowly than GPUs these days. An AMD platform from 2018 has better upgrade options than its Intel equivalent, but both solutions are excellent today.

At the same time, however, chipset compatibility doesn’t extend infinitely far into the future. AMD is going to make Zen 3 CPUs available to a few X470 and B450 motherboards, but it’s an unusual state of affairs. You can plug a modern GPU into a ten-year-old motherboard, but CPU / chipset compatibility rarely aligns for more than a few years, even where AMD is concerned.

On the GPU front, Turing is a few months from its second birthday, and all of our predictions about how long it would take ray tracing to break into mainstream gaming have proven true. There’s a solid handful of RTX games on the market that show off the technology reasonably well, but Ampere-derived GPUs will be in-market before ray tracing establishes itself as a must-have feature.

This is only a problem if you buy your GPUs based on what you expect them to do in the future. If you bought an RTX 2080 or 2080 Ti because you wanted that level of performance in conventional games and viewed ray tracing as a nice addition, then you’ll replace Turing with something else and be satisfied with things. If you bought Turing believing it would take ray tracing mainstream, you’ll probably be disappointed.


Modern smartphones are incredibly complex, with the ability to display a huge range of content and to navigate a complex set of color gamuts, file formats, and media types. Occasionally, however, some of those capabilities interact with each other in unanticipated ways, and you get a problem like this.

As first spotted by Twitter account Ice Universe, using the wrong wallpaper on an Android phone can soft-brick the device. While it isn’t technically dead, the phone will boot-loop endlessly because Android can’t handle the color space used in one particular photo. Note: Merely viewing the photo won’t damage your Android device — just don’t set it as your wallpaper.

Device behavior seems to vary slightly depending on the model and manufacturer. Sometimes people have been able to change their wallpaper before the device crashes or use the TWRP recovery tool, but this appears to be more the exception than the rule. Most of the time, affected users have no choice but to perform a factory reset. Samsung is reportedly working on a fix in UEFI, and Android 11 should also resolve the problem. In the meantime, don’t use this image for wallpaper.

According to developer Davide Bianco, the problem is caused by a lack of support for non-sRGB images in Android’s SystemUI itself. This is why you can view the image just fine in a browser, but setting it as a wallpaper will temporarily brick your phone. When SystemUI attempts to map the image’s color values, they exceed the expected array size and crash the phone.
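Bianco’s description of the failure mode can be sketched in a few lines. Everything here is illustrative rather than Android’s actual SystemUI code: the Rec. 709 luma weights are standard, but the rounding behavior and histogram are a hypothetical reconstruction of how a computed color value can land one past the end of a fixed-size array:

```python
import math

def luminance_histogram(pixels):
    """Count pixels by 8-bit luminance; sized for values 0-255 only."""
    histogram = [0] * 256
    for r, g, b in pixels:
        # Rounding each weighted channel up independently (a plausible
        # conversion bug, not Android's real code) can push y past 255.
        y = math.ceil(0.2126 * r) + math.ceil(0.7152 * g) + math.ceil(0.0722 * b)
        histogram[y] += 1  # IndexError when y > 255 -> the crash loop
    return histogram

try:
    luminance_histogram([(255, 255, 255)])  # a pure-white pixel
except IndexError:
    print("crash: computed luminance falls outside the 256-entry array")
```

Because the wallpaper is re-rendered at every boot, the same out-of-bounds access fires again each time the system comes up — hence the boot loop rather than a one-off crash.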

In theory, these sorts of images could be used as a booby trap. Send someone a gorgeous wallpaper, they install it, and boom — their device is now boot-looped. Android 11 will fix this by adding support for non-sRGB wallpapers.

Weirdly, not every single Android device is vulnerable to this problem. A Huawei Mate 20 Pro didn’t crash when tested by 9to5Google and OnePlus devices are also rumored to be immune. Products from Samsung, on the other hand, very much aren’t. It’s possible that the specific restrictions or software changes on the Huawei and OnePlus devices allow them to handle this kind of content differently.

Either way, best not to source wallpaper from random people until this problem is resolved, unless you’ve recently backed your phone up. Apple, of course, has had similar problems — on two separate occasions, sending the wrong character to an iPhone has been demonstrated to cause it to crash.


Normally, Intel refreshes both its desktop and its HEDT product lines on a yearly cadence, with HEDT tending to launch after the desktop refresh. Since Skylake-X debuted in 2017 the two platforms have used the same architecture, with desktop chips launching first. Now, it looks as though Intel won’t have an HEDT refresh this year.

This is less unusual than it might seem. Since launching the 3960X in 2011, Intel has skipped years on several occasions. There was a nearly two-year gap between the 3960X and the 4960X (Q4 2011, Q3 2013) and the 5960X and 6950X (Q3 2014, Q2 2016). Since the 6950X, Intel has delivered a regular cadence of updates, with Skylake X debuting in 2017, the 9th Gen HEDT refresh in 2018, and last year’s launch of the Core i9-10980XE.

In most cases, these pauses happen at platform boundaries. The X79 supported the 3960X and the 4960X, the X99 anchored the 5960X and 6950X, and the X299 has anchored the Skylake X, Skylake X Refresh, and Cascade Lake families. If Intel is holding off for a new chipset, it would explain why the company doesn’t have an HEDT chip ready for this segment.

Frankly, it makes sense for Intel to hold off at this point. Rocket Lake, Intel’s next-gen 14nm architecture (at least according to rumor), isn’t ready yet and won’t ship until the end of the year. Theoretically, Intel’s next HEDT part could be based on Ice Lake or Tiger Lake (Sunny Cove and Willow Cove architectures, respectively). Rumor has it that Rocket Lake is the 14nm implementation of Sunny Cove while Tiger Lake is the 10nm variant.

We don’t even know if the next HEDT platform will be built on 14nm or 10nm. If Intel decides to follow standard procedure, it’ll be a 10nm CPU with a few strategic changes to differentiate it from Xeon. If it sticks with the bifurcated approach we’ve seen to the enthusiast market, we could expect a 14nm core with higher base clocks compared to 10nm. Then again, we don’t know what the clock delta will be between Tiger and Rocket. In theory, Intel’s 10nm+ should reduce the clock differential between the two nodes, which has grown rather large.

As for Intel’s competition, we haven’t heard a peep out of AMD regarding Threadripper. After last year’s sprint to 64 cores, we’d expect AMD to keep core counts steady this time around and likely focus on improvements to IPC and clocks instead. With Windows unable to spread a single process across more than 64 logical processors by default, there’s not much benefit to pushing past the 64C/128T point.
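That 64-thread ceiling comes from Windows processor groups: each group holds at most 64 logical processors, and a process stays within one group unless it is explicitly written to span several. The arithmetic behind the 64C/128T remark, as a quick sketch:

```python
import math

# Each Windows processor group holds at most 64 logical processors.
GROUP_SIZE = 64

def processor_groups(logical_processors):
    """Number of processor groups a system's logical processors occupy."""
    return math.ceil(logical_processors / GROUP_SIZE)

print(processor_groups(128))  # -> 2: a 64C/128T chip spans two groups
print(processor_groups(64))   # -> 1: a 32C/64T chip fits in one group
```

So a 128-thread Threadripper already forces ordinary, single-group software to leave half the machine idle — one reason pushing core counts even higher offers diminishing returns for desktop workloads.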

There’s also no info yet on how many cores future HEDT chips may offer. Current solutions top out at 18 for Intel and 64 for AMD, and that’s obviously a gap that it’s in Intel’s best interest to reduce. Thanks to THG for spotting the slide.