If VT-d requires all this dedicated hardware, and if it doesn't work on consumer mobos, and if it needs some crazy software stack (which Hyper-V in Win10 doesn't support but the Server versions do) I don't get why it's "important" to have on the mainstream desktop processors in the first place.

I do not understand what I do. For what I want to do I do not do, but what I hate I do.

If VT-d requires all this dedicated hardware, and if it doesn't work on consumer mobos, and if it needs some crazy software stack (which Hyper-V in Win10 doesn't support but the Server versions do) I don't get why it's "important" to have on the mainstream desktop processors in the first place.

Well, it works on some consumer motherboards. And it works for some devices, like most recent GPUs.

So, if you are willing to get dirty with excessive Linux and KVM/Qemu fiddling, you can actually run a Windows VM on Linux with direct access to your graphics card. Cool, huh?

So you can run linux and yet still game seamlessly on the same machine. Well, you have to input-switch between different GPUs (like your integrated Intel one), but other than that it works. Unless you want reliable sound, which can honestly be even harder to get right. But it *can* work!

So it's neat to have if everything lines up, but other than a GPU, you don't really need bare-metal access to devices from inside a VM unless you are in a real production environment. You don't realize the benefit unless you are at a scale it's hard to imagine a hobbyist achieving.

So, yeah, you are right: if you are shelling out hundreds, if not thousands, on recent multi-port 10G NICs, why are you agonizing over the $50 difference between a Pentium and an i3 and praying that the obscure "Virtualizational supported: HAS" entry for your motherboard even means VT-d, let alone that it actually works in the current firmware, if it ever will work at all!

Is the G4620 overclockable? That would be amazing. I don't follow Pentiums much since the venerable G3258 Anniversary Edition.

I think if your budget is that low, the G4620 is a great value. You get 2C/4T for $50 less than the i3 and only sacrifice 300MHz clockspeed (8% difference in clockspeed for 40% reduction in price). Keep in mind the Skylake i3-6100 only runs at 3.7GHz and has the same 3MB cache.

Is the G4620 overclockable? That would be amazing. I don't follow Pentiums much since the venerable G3258 Anniversary Edition.

That's a great question. Tribal knowledge is that CPUs past Sandy Bridge couldn't be OC'd because they don't support base clock tweaking, but I've never owned a newer locked chip so I don't know myself.

After the single Anniversary Edition, my understanding is that Intel hasn't unlocked any other Pentiums, including Skylake and forthcoming Kaby Lake models.

derFunkenstein, do you know how a modern locked CPU is overclocked? It used to be that you adjusted the motherboard base clock up in 1MHz increments, but I haven't heard of that method being used in quite some time, not since Ivy Bridge at least.

EDIT: Nevermind, it looks like MB-based overclocking died with Haswell. Strange that I've not had a locked processor in all that time.

My own experiments with the Core i5-6600K and Core i3-6100 have caused me to conclude that bus overclocking is still basically impossible beyond a couple of percent. I don't have any modern Pentiums to test with. ASRock abandoned its SkyOC feature, and TR's experiments with it were problematic, to say the least. I have no proof, but I can't imagine taking a swim in Kaby Lake is any different than dipping your toes into Skylake.

Like other people here, I can't really find any justification for the i3's added price over the G4560 that isn't so niche that the users in question will also have far deeper pockets and avoid this discussion entirely.

I wouldn't be so quick to write off AVX support. It isn't a big deal like HT, but I'd definitely count it in the same batch of speed boosts with the clock and cache, even for gaming. The gains aren't big for most gaming workloads, but they're often easy for devs to get, and they do exist. I'd expect them to be a bigger deal in the future, as usual.

Speaking of this, I'd be really interested to see Pentium vs i3 gaming benchmarks at equal clocks, now that they both have HT. Isolating cache and AVX effects could be very informative.

I wouldn't be so quick to write off AVX support. It isn't a big deal like HT, but I'd definitely count it in the same batch of speed boosts with the clock and cache, even for gaming. The gains aren't big for most gaming workloads, but they're often easy for devs to get, and they do exist. I'd expect them to be a bigger deal in the future, as usual.

Speaking of this, I'd be really interested to see Pentium vs i3 gaming benchmarks at equal clocks, now that they both have HT. Isolating cache and AVX effects could be very informative.

I'd honestly be kinda surprised if many games (if any) take advantage of AVX support...

VT-d, D for Device: ..Most Intel CPUs support this, but until very recently Intel seems to have wanted to keep VT-d and ECC support separate on the lower end for segmentation reasons. Evidently this still applies to some of the newer Pentiums, but not the newer i3s. It's hard to keep track of; as always, consult ARK.

tl;dr: VT-x is a win (sometimes even a requirement) for everyday hobbyist/tinkering VMs. VT-d is not. It is something you should only care about if you already know you need it and how you're going to use it. It is part of a *FULL* stack decision, not a standalone CPU feature.
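
As an aside, VT-x at least is trivially discoverable from software, which is part of why it "just works" compared to VT-d. A minimal sketch using GCC/Clang's `cpuid.h` on x86 (the function name is made up for illustration):

```c
#include <cpuid.h>

/* CPUID leaf 1, ECX bit 5 is the VMX flag: Intel VT-x.
 * VT-d, by contrast, lives in the chipset and the ACPI DMAR table and
 * has no CPUID bit, which is part of why it's so much harder to verify. */
int has_vtx(void) {
    unsigned int eax, ebx, ecx, edx;
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 0;
    return (ecx >> 5) & 1u;
}
```

Note this only tells you the CPU implements VMX; firmware can still disable it.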

Only the newer "E" versions of the i3 have ECC support in addition to VT-d.

I look at it somewhat differently. It's more a matter of which OS is your underlying OS or "base" OS. IF it's Windows (as a base) then you can always drop back into that Windows base to get full system resources - then you don't really need VT-d. IF it's something else (usually a flavor of Linux for your base), then you will probably want VT-d to pass through hardware to a virtual machine.

-I also have a hard time "seeing" VT-d withOUT ECC memory; that's just asking for trouble IMO. ..and if you are passing through hardware then you are concerned with performance, leaving ONLY the i3-7101E as the processor of choice in the budget non-Xeon segment (..and even then you've got limitations with m-board choices, as mentioned below).

btw, I'm not seeing pro-level chipsets for Kaby Lake yet, which would mean going back to Skylake C236 and C232 m-boards (..which won't support Optane and a few other features of Kaby Lake).

I'd honestly be kinda surprised if many games (if any) take advantage of AVX support...

I'd be much more surprised if they didn't, since compilers are smart enough to sort lots of it out themselves. When you can just flip a switch and likely gain a few percent here or there on the majority of PCs, the question becomes more about why not than why. There are a fair number of good reasons a dev may not want to flip that switch, but most of them I can think of either only apply to a particular part of a game's code or aren't the sort of thing that game devs are known for being concerned about in the first place.

They should call it the Pentium 5. I mean, it's worth it. Great chip, great value. The G4560 is even better as far as value for money is concerned. Have we ever had such a great entry level CPU since the wildly overclockable Celeron 300A?

They should call it the Pentium 5. I mean, it's worth it. Great chip, great value. The G4560 is even better as far as value for money is concerned. Have we ever had such a great entry level CPU since the wildly overclockable Celeron 300A?

Nah, this is nothing like when the 300A @ 450MHz was nipping at the heels of a P3 500 costing 4x as much, or even as remarkable as cheapo C2D OCing. We are just so completely burned out by Intel's market segmentation games that anything that looks halfway interesting stands out a lot.

They should call it the Pentium 5. I mean, it's worth it. Great chip, great value. The G4560 is even better as far as value for money is concerned. Have we ever had such a great entry level CPU since the wildly overclockable Celeron 300A?

Nah, this is nothing like when the 300A @ 450MHz was nipping at the heels of a P3 500 costing 4x as much, or even as remarkable as cheapo C2D OCing. We are just so completely burned out by Intel's market segmentation games that anything that looks halfway interesting stands out a lot.

The real reason the Celeron 300As were blasting away the Pentium IIs and Katmai Pentium IIIs back in the day was their on-die L2 cache, which was smaller (128KiB) but faster than the off-die cache on the Pentium II (which ran at 1/2 the CPU speed over the back-side bus), and there weren't that many applications that used the full 512KiB on those Pentium IIs. SSE was still the new hotness for the Katmai Pentium IIIs and not many applications used it yet.

It's more a matter of which OS is your underlying OS or "base" OS. IF it's Windows (as a base) then you can always drop back into that Windows base to get full system resources - then you don't really need VT-d. IF it's something else (usually a flavor of Linux for your base), then you will probably want VT-d to pass through hardware to a virtual machine.

No, I am afraid not.

You shouldn't want it unless you can use it, and you can't use it unless:

1) Your motherboard and its firmware support it correctly
2) Your device has the feature set it needs
3) Your device is connected to the motherboard in a way that isn't exclusionary/limited because of various bus setups (Are some things still using shared interrupts instead of MSI? Is your dual/quad-port NIC implemented in such a way that each PHY/MAC can be independently addressed? Do you have two devices on the same PCI bus [they can't be in separate domains]? Which disk is on which SATA/SAS/other controller? Is it port-multiplied? Etc...)

Furthermore, 4) you won't *really* be using it, because the benefits aren't particularly meaningful unless you are at scale: as a hobbyist, you're generally not worried about your IO being CPU-bound.
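
On the Linux side, one quick sanity check for point 1 is whether the kernel actually populated any IOMMU groups at boot; a minimal sketch (the sysfs path assumes a reasonably recent kernel, and the function name is made up):

```c
#include <dirent.h>

/* If the kernel brought the IOMMU up (VT-d enabled in firmware, plus
 * intel_iommu=on where needed), /sys/kernel/iommu_groups/ holds one
 * directory per group. Zero groups means no working VT-d, full stop. */
int count_iommu_groups(void) {
    DIR *d = opendir("/sys/kernel/iommu_groups");
    if (!d)
        return 0;                  /* path absent: IOMMU not active */
    int n = 0;
    for (struct dirent *e; (e = readdir(d)) != 0; )
        if (e->d_name[0] != '.')   /* skip "." and ".." */
            n++;
    closedir(d);
    return n;
}
```

Which groups your devices land in then determines what can actually be passed through independently, per point 3.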

---

The first one is tough. A lot of people don't have it, or can't easily be sure that they do. It's difficult to get good feedback for regular enthusiast boards, precisely because of the very confusion I was trying to cut through: a lot of people not only don't understand what VT-d does, but readily conflate it with VT-x.

The manufacturers don't typically do a good job of explaining this sort of feature support anyway, and it's such a niche thing in that class of hardware that you can't even be sure the spec sheet isn't wrong or that the latest firmware doesn't bork it.

2) is also tough. Do you know what you are even looking for? Or how to look for it? Most people don't. They don't know the chip on their NIC, and even if they did, they might not know how to look up the relevant part of its feature list/data sheet.

VT-d doesn't just "work". Your Broadcom/Realtek NICs don't have function-level reset. Newer Intel NICs usually do, but unless you buy an actual server board you're only going to have one, maybe two, of those integrated in the first place. Got one of those nice Intel-based dual/quad-port cards? It probably doesn't either, unless you bought a newer design for several hundred dollars that supports even further IO virtualization features you still can't use with just basic VT-d, like SR-IOV.

The only relevant case I can think of is the GPU thing, because they do have support and it's a use case I can actually understand. Even then, I mean, that's cool I guess, but unless that's specifically what you are going for, what are you doing?

---

Anyway, I use linux. I use VMs. I have a setup that properly supports VT-d. I don't use it. Primarily because of 4) but also because of 2).

It's not about OS choice. It's about what you are trying to do and whether or not your *ENTIRE* setup supports it. Do you even have multiple NICs running to switch from your VM box? Is network IO being CPU-bound (and I don't mean higher layers like SSH doing chacha20-poly1305 and choking you)? Is your disk throughput artificially being held back?

If you have those concerns, you know you do. If you don't, no, you not only don't need VT-d but your setup probably cannot even support it.

I'd honestly be kinda surprised if many games (if any) take advantage of AVX support...

I'd be much more surprised if they didn't, since compilers are smart enough to sort lots of it out themselves. When you can just flip a switch and likely gain a few percent here or there on the majority of PCs, the question becomes more about why not than why. There are a fair number of good reasons a dev may not want to flip that switch, but most of them I can think of either only apply to a particular part of a game's code or aren't the sort of thing that game devs are known for being concerned about in the first place.

Compilers don't sort it out for themselves. You tell the compiler the oldest instruction subset you want to target, and it generates "lowest common denominator" code which will run on anything from that generation or newer.

To fully leverage new instructions while also supporting older processors, the application needs to have multiple code paths (or multiple complete binaries) built with different compiler options. On launch the application needs to decide what instruction extensions are supported by the CPU it is running on, and configure itself (or load the appropriate version of the program) to select the code paths which have been optimized for that CPU.
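
That runtime selection can be sketched like this (GCC/Clang on x86; the function names are made up for illustration):

```c
#include <stddef.h>

/* AVX2-targeted clone: the attribute lets this one function be compiled
 * with AVX2 enabled even though the rest of the binary targets baseline x86. */
__attribute__((target("avx2")))
static void scale_avx2(float *y, const float *x, float a, size_t n) {
    for (size_t i = 0; i < n; i++)   /* eligible for auto-vectorization */
        y[i] = a * x[i];
}

/* Baseline path that runs on any x86-64 CPU. */
static void scale_baseline(float *y, const float *x, float a, size_t n) {
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i];
}

/* Check at runtime which code path the CPU can actually execute. */
void scale(float *y, const float *x, float a, size_t n) {
    __builtin_cpu_init();            /* required before __builtin_cpu_supports */
    if (__builtin_cpu_supports("avx2"))
        scale_avx2(y, x, a, n);
    else
        scale_baseline(y, x, a, n);
}
```

Both paths have to be written, built, and tested, which is exactly the cost being described here.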

While not rocket science by any means, it's not completely trivial either. It also increases the amount of testing required, since differently optimized code can mask/expose subtle bugs, especially in multi-threaded applications. I could easily see this sort of optimization getting axed in the face of tight development budgets and impossible deadlines.

I could easily see this sort of optimization getting axed in the face of tight development budgets and impossible deadlines.

It's even tough for the compiler writers; GCC builtins/intrinsics are notorious. If it's such a persistent area of pain for them, time and time again, then yeah, there's no way games, of all things, are going to rush into that mess.

And, of course, being able to vectorize 4 or 8 wide for your operations on single/double floats also doesn't do squat if the algorithms expressed in your code never actually do anything that could ever take advantage of that.

And that's really all AVX does. It's primarily SSE with another 128 bits you can SIMD across.

So if you don't have a situation such as having 8 32-bit floats you want to do the same thing to without any dependencies, it can never help you.
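
That case looks like this with the AVX intrinsics (x86 with AVX assumed; it's just the textbook eight-lane add):

```c
#include <immintrin.h>

/* Eight independent 32-bit float adds performed by one AVX instruction:
 * the whole point of the 256-bit registers. */
__attribute__((target("avx")))
void add8(float out[8], const float a[8], const float b[8]) {
    __m256 va = _mm256_loadu_ps(a);   /* load 8 floats, unaligned OK */
    __m256 vb = _mm256_loadu_ps(b);
    _mm256_storeu_ps(out, _mm256_add_ps(va, vb));
}
```

Scatter a few dependencies between those lanes and the win evaporates, which is the point.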

And that's a pretty unusual, specialized thing. So unusual that most programmers will never see code that does anything like that. So specialized that we have hardware (GPUs) to accelerate it already.

If you don't do it, you can't use it. Simple as that. Compiler switches aren't magic.

I'd honestly be kinda surprised if many games (if any) take advantage of AVX support...

I'd be much more surprised if they didn't, since compilers are smart enough to sort lots of it out themselves. When you can just flip a switch and likely gain a few percent here or there on the majority of PCs, the question becomes more about why not than why. There are a fair number of good reasons a dev may not want to flip that switch, but most of them I can think of either only apply to a particular part of a game's code or aren't the sort of thing that game devs are known for being concerned about in the first place.

Compilers don't sort it out for themselves. You tell the compiler the oldest instruction subset you want to target, and it generates "lowest common denominator" code which will run on anything from that generation or newer.

To fully leverage new instructions while also supporting older processors, the application needs to have multiple code paths (or multiple complete binaries) built with different compiler options. On launch the application needs to decide what instruction extensions are supported by the CPU it is running on, and configure itself (or load the appropriate version of the program) to select the code paths which have been optimized for that CPU.

While not rocket science by any means, it's not completely trivial either. It also increases the amount of testing required, since differently optimized code can mask/expose subtle bugs, especially in multi-threaded applications. I could easily see this sort of optimization getting axed in the face of tight development budgets and impossible deadlines.

This, exactly. Compilers won't generate binaries that'll run multiple code paths without the devs spending time to develop and test all paths, and then have the compiler build all of them.

The Intel erasure code libraries are a good example, but one look at the source will tell you why: it's hand-coded assembly. I can't imagine many (if any) game devs go through that level of optimization for something that probably very rarely gives them much of a boost. How much widely-vectorizable code do we really expect to see in a game engine? Loop unrolling and common extensions get you 90% of the way there in many situations...

I thought compilers could figure out some minor multi-path code on their own, of course at expense of testing effort and potential bugginess, but looking through some docs, it looks like GCC really can't do any such thing. I wonder what I was looking at before that made me think otherwise? I haven't looked into it much myself (obviously) because I'm more interested in consistency (at least for now), but game devs on the whole are infamous for valuing performance over reliability.
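
EDIT: Digging a bit more, there is apparently a limited exception: GCC's function multiversioning via `target_clones` (GCC 6+ on x86 with glibc ifunc support assumed), which compiles several variants of one function and has the loader pick the best supported one at startup. Something like:

```c
/* One source function, several compiled variants; the dynamic linker's
 * ifunc mechanism selects the best supported clone at program load. */
__attribute__((target_clones("avx2", "sse4.2", "default")))
double dot(const double *a, const double *b, int n) {
    double s = 0.0;
    for (int i = 0; i < n; i++)
        s += a[i] * b[i];
    return s;
}
```

It removes the dispatch plumbing, not the QA cost: every clone still has to be tested.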

AVX can still be useful for games, though. Games do all kinds of different work, and there's a big gap between work being non-vector and being suitable for a GPU. Even in rendering (as opposed to game logic), a lot of stuff that should really be done on the GPU doesn't make it that far. In 2015, an AAA game was released that appears to still do shadow mapping on the CPU, for probably the most ridiculous example. Game logic may include various things like physics that can be solidly vector, but that doesn't mean you want to put them on a GPU. Many games probably don't spend enough time in particular segments of vectorizable code to make AVX worthwhile, but then there are things like FO4's shadows and PCars' CPU physics. In those two cases, to name a couple, I'd still be much more surprised by AVX being absent than present.

Back to the subject of the thread, even if AVX were to be only worth a couple percent when averaged across all games, it seems worth mentioning. After all, we're already talking about tiny differences between these CPUs.

@AVX in games - The Pentium G3258 didn't have AVX, and it stood its ground pretty well clock-for-clock against the i7-4790K in the Crysis 3 testing TR did. You can obviously look at other reviews for more game testing. Granted, that was a few years ago, but I doubt the landscape of AVX use in games has changed much since then. The G3258's biggest handicap in modern games appears to be its 2 cores. I think we can safely say that AVX support doesn't have much influence in gaming. You're still mostly going to be comparing clock speed differences for performance and price. And the price difference between the new Pentiums and the i3 lineup is pretty large.

As a G3258 owner, I'm well aware that it isn't holding up to 2016 gaming as well as it looked like it would in 2014, though of course the data on newer games is a bit sparse. I expect the bulk of that is due to a different multithreading architecture in games (one that much prefers having four threads to work with), but I wouldn't rule out AVX as a factor yet.

Ark says "SSE4.1/4.2" for Pentiums and "SSE4.1/4.2, AVX 2.0" for i# parts, so it looks like no AVX at all on the low end.

I am particularly annoyed by Intel's market segmentation games with this, because if they had enabled AVX on the low end like they should have (starting with Sandy Bridge), game devs would be able to start shipping AVX-only code pretty soon.

It's more a matter of which OS is your underlying OS or "base" OS. IF it's Windows (as a base) then you can always drop back into that Windows base to get full system resources - then you don't really need VT-d. IF it's something else (usually a flavor of Linux for your base), then you will probably want VT-d to pass through hardware to a virtual machine.

No, I am afraid not.

You shouldn't want it unless you can use it..

..It's not about OS choice.

.."afraid not" all you want: but that is the way I look at it.

It's a "top-down" look - specifically use-case of either desire or need. Of course you'll need to meet the hardware and software requirements to be able to use it, that's an obvious given. It may be a non-obvious technical issue for any given user of course, but that was NOT the point I was trying to make. (..and I personally found that it wasn't at all difficult with a bit of planning.)

My perspective suggests that VT-d is a non-issue for most users: most are going to be bare-metal booting into Windows (or the latest macOS) and will use something like VirtualBox or VMware (even Hyper-V "user" use is comparatively rare) riding on top of that OS to boot up a VM that doesn't need to grab hardware resources - OR, if that user does need to grab those hardware resources, they can always close the VM and work with their base OS. That probably describes more than 95% of users.

IF however you bare-metal boot into Linux, you aren't in the "user" majority (..like, at ALL) - and even then, it's not just whether you bare-metal boot into a Linux distro, but whether you bare-metal boot into a Linux distro AND are going to use Qemu/KVM (..which is what, less than half of the 2% "user" base that's attributed to Linux "user" use?). If you are contemplating VM use in that context (Qemu/KVM), then you are likely going to WANT (not necessarily need) VT-d. It's at that point that you would start your planning. (Or Xen, or ESXi.)

Effectively it's a routing mechanism that should work for most "users". Do you bare-metal boot into Windows? IF so, VT-x should be all you need. IF not, then you'll likely want VT-d/IOMMU as an option and you'll need to plan accordingly (..and I'm not saying that will be an easy "road to haul", even if I didn't find it particularly difficult - where I did have all sorts of problems was with other issues related to various Linux distros and even Windows.)

Within this perspective then it (largely) STARTS with the choice of OS.

-and btw, you can most certainly want things you can't use (.."should" is largely irrelevant). But you already knew that.