Intel has confirmed it will start to sell discrete GPUs in the year 2020.
News of Chipzilla’s plans appeared in a post by analyst Ryan Shrout, who said Intel CEO Brian Krzanich revealed them at an analyst event last week.
Intel confirmed Shrout’s piece, telling us "We’re pleased to confirm our first …

COMMENTS

Beat them on packaging?

Not sure about this. nVidia perhaps yes, but AMD has been doing very well in this regard recently: for example the Radeon Fury and Vega GPUs with HBM, and EPYC/Threadripper on the CPU side. Correct me if I'm wrong, but Intel are only just getting into this game, and they had to use an AMD GPU die to do it.

Re: Beat them on packaging?

Intel's biggest problem will be delivering drivers that don't suck, something they've had a lot of problems doing in the graphics world even at lower performance levels. They can make the best GPU hardware there is but if the drivers don't work people won't buy them.

Re: Beat them on packaging?

They only have to suck less than the other two. I've seen plenty of sucky drivers from both over the years.

"... if the drivers don't work people won't buy them."

I think the phrase here is "citation needed". There are a handful of people who have almost religious fervour for grovelling over the latest high-end hardware. They care, but they are only 0.0001% of the market.

Always good to have competition to rein in that nVidia/AMD duopoly

Re: Always good to have competition to rein in that nVidia/AMD duopoly

And the rest. Rendition, S3, 3DLabs, Cirrus, Tseng, and then there were all the optimisers like Videologic (still around in a different guise), Guillemot and thousands more.

The great shake-out did for them. A combination of the move to 3D accelerators (well, little more than DMA engines with an ALU in the middle) and the eventual (and inevitable) death of the board/optimiser companies.

Building GPUs is now a job for 1,000 people. Only the 5 richest kings in Europe can afford it.

There are many other GPU creators out there. Qualcomm, ARM, Imagination. They just don't play in the PC space.

What Intel got their hands on a few years back was the engineering talent from the once-mighty 3DLabs (ZiiLABS -> Intel). Maybe they've done something with it, but as a company Intel are becoming the new IBM of the '80s: slow-moving, single-minded and inflexible.

The last 5 years have seen Intel best everybody on process. The industry as a whole is having problems scaling.

Re: Always good to have competition to rein in that nVidia/AMD duopoly

> There are many other GPU creators out there. Qualcomm, ARM, Imagination

And Apple and Google too. The former has ended its relationship with Imagination Technologies, and both are likely looking at mobile GPUs that do more than shade polygons: ones that can also be put more efficiently to other tasks such as DSP and object recognition.

Re: Always good to have competition to rein in that nVidia/AMD duopoly

Google isn't doing graphics. Look at the papers on the Tensor chips they've been doing and you can see that while the architectures are similar (SIMD machines with massive high-bandwidth memory access), there are distinct differences between a Tensor machine and a graphics card. From just the papers Google has published you can estimate what their Tensor chips cost, and a reasonable estimate is that those chips alone, not counting the HBM, assembly and all else, cost much more than a maxed-out 1080 Ti card. Google may be large as companies go, but they're still not large enough to get the massive discounts that come with volume Si production.

Re: Always good to have competition to rein in that nVidia/AMD duopoly

The big difference between desktop and mobile GPUs is that a mobile GPU is still a GPU. Desktop GPUs are about large-scale cores, and most of the companies you mentioned in the mobile space lack the in-house skills to handle ASIC cores. When you license their tech, usually you’re getting a whole lot of VHDL (or similar) bits that can be added to another set of cores. ARM, I believe, does a lot of work on their ASIC synthesis, and of course Qualcomm does as well, but their cores are not meant to be discrete parts.

Remember most IP core companies struggle with high-speed serial buses, which is why USB3, SATA and PCIe running at 10Gb/sec or more are hard to come by from those vendors.

AMD, Intel and NVidia verify their designs on massive ASIC simulators, costing hundreds of millions of dollars, from companies like Mentor Graphics. Samsung could probably do it, and probably Qualcomm, but even ARM may have difficulties developing these technologies.

ASIC development is also closed loop. Very few universities in the world offer actual ASIC development programs in-house. The graduates of those programs are quickly sucked up by massive companies and are offered very good packages for their skills.

These days, companies like Google, Microsoft and Apple are doing a lot of ASIC design in house. Most other newcomers don’t even know how to manage an ASIC project. It’s often surprising that none of the big boys like Qualcomm have sucked up TI, who have strong expertise in DSP ASIC synthesis. Though even TI has struggled A LOT with high-speed serial in recent years. Maxwell’s theory is murder for most companies.

So most GPU vendors are limited to what they can design and test in FPGA which is extremely limiting.

Oh... let’s not even talk about the problems that would arise for most companies attempting to handle either OpenCL or TensorFlow in their hardware and drivers. Or what about Vulkan. All of these would devastate most companies. Consider that AMD, Intel and NVidia release a new GPU driver almost every month. Most small companies couldn’t afford that scale of development, or even distribution.

Re: Always good to have competition to rein in that nVidia/AMD duopoly

"run much more than a maxed out 1080 Ti card."

I wouldn't compare a 1080 Ti with a TPU. No-one who is planning on using one would substitute the other. Even Titans and Vegas are not really comparable: neither the precision nor the cost is on the same scale.

Comparing a TPU with a Tesla is more viable, since they would be used for equivalent workloads.

You could possibly use half a dozen Titans to do Tesla like stuff, but why would you bother?

In general people either have the budget, so want the best in the smallest form factor (TPU or Tesla), or don't and then want the best bang for buck (retail GPU).

It will be interesting to see what Intel come out with, whether they are aiming at the retail or industrial end of the spectrum.

The term GPU is much older than nvidia making it popular (in 1999). The i740 was 1998. So GPU was a thing, and the i740 probably qualified (as a bad one) even if people tended to call them graphics accelerators at the time instead.

So... just as AMD are talking about putting a real GPU into the chip directly, Intel want to make a GPU that goes onto a plug-in board?

I think they missed this boat too, which is ironic because they did make integrated graphics chipsets for over a decade before this announcement. They were just always pants.

Honestly, guys, just make a decent GPU. 2020 will be too late. You'll probably hold the processor market for a while but you've left everything else far too late.

To be honest, we're following the "floating point co-processor" model. At the moment GPUs are basically add-in cards on expansion slots that you have to talk to with non-standard protocols, and every one has a different instruction set. Next they'll be a separate standardised socket on the motherboard, next to the CPU and sharing its cooling, with things like the Vulkan API determining a base instruction set. Before you know it, every chip will really be a GPU first, with a legacy CPU inside.

AMD, for once, are following the right path, while Intel just flounder like they usually do.

I'm not convinced the AMD route is the right way to go to be honest. I feel they're clutching at straws a bit in a way to differentiate themselves from Intel/nVidia.

While a single die makes sense for the mobile sector (including laptops), external cards for the desktop/server markets make much more sense. People (read: gamers) are much more likely to upgrade their GPUs than their CPUs (which will usually involve new motherboard and RAM too). Having add-in cards also makes it far easier to run multiple GPUs in parallel, and, again, update those cards as technology improves.

" People (read: gamers) are much more likely to upgrade their GPUs than their CPUs"

Which is why I see the eventual evolution towards a standardised "GPU socket" rather than a PCIe slot.

Every PCIe card at the moment has a different height, thickness, cooling and power requirements. By putting the GPU straight on the motherboard, you can standardise it, bring it into the standard cooling and power arrangements, keep it close to the CPU and RAM, and separate it out from PCIe peripherals. You can then upgrade it individually.

But over time we'll hit a limit (e.g. X amount of PCIe x16 lanes), and then they'll just get folded into the CPU directly and upgrading them individually will hardly matter. All that will then happen is you'll get "dual-GPU" boards and expansions that include TWO such sockets. Four such sockets. Etc. And you end up with the same "I can have 12 10,000-core GPUs", but you also get 12 controlling CPUs for free by doing so.

If anything, I see the CPU disappearing into the GPU, not the other way around. People will still buy a dozen GPUs for their mining rig; they just won't care that there are also a dozen bog-standard CPUs issuing the commands to them inside the same chip.

Single die vs plugin card

Sure *gamers* will upgrade their GPU - but the really *big* market for GPUs is not processing graphics!

We have pretty much run out of steam improving single-threaded performance. Adding more cores is the only way to improve performance. Once you start doing that at scale, you can drastically reduce the cost of each core by not trying to squeeze every last drop of performance out of it (you also design out Spectre et al). Once standard desktop software needs a GPU to perform well, everybody is going to want one - and they won't want it on a separate card.

The commentard who compared GPUs to floating point hardware had it exactly right.
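The many-simple-cores argument rests on one property: a data-parallel workload splits cleanly across workers, with no element depending on any other. A toy sketch in Python (the `shade` function here is a made-up stand-in for per-element GPU work, not real shading):

```python
from concurrent.futures import ThreadPoolExecutor

def shade(pixel):
    # Made-up stand-in for independent per-element work
    # (think: shading one pixel). No shared state, so it
    # parallelises trivially across however many cores exist.
    return (pixel * 97 + 13) % 256

pixels = list(range(10_000))

# Because no element depends on any other, the work can be
# divided among any number of modest workers -- the property
# GPUs exploit at massive scale.
with ThreadPoolExecutor(max_workers=4) as pool:
    shaded = list(pool.map(shade, pixels))
```

Real GPU code adds memory-hierarchy and divergence concerns on top, but the independence of each element is the core of the scaling argument above.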

Re: Single die vs plugin card

"Sure *gamers* will upgrade their GPU - but the really *big* market for GPUs is not processing graphics!"

Well, nVidia disagree with you there.

FY2019 Q1 results (Feb 2018 - Apr 2018)

Growth is the change from the same period last year.

Gaming revenue: $1.7B, 68% growth

Datacenter: $700M, 71% growth

Professional visualisation: $250M, 22% growth

Automotive: $145M, 4% growth

Crypto miners: $290M.

So roughly two thirds of nVidia revenue (not just GPUs) is *only* processing graphics. Automotive is also processing graphics, but doing other stuff too, same as data centre. So counting dual use as not doing graphics, the majority use case is still crunching numbers for graphics.
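As a back-of-envelope check on that "two thirds", summing the figures as quoted (the grouping of gaming plus professional visualisation as "pure graphics" is the reading used here):

```python
# nVidia quarterly revenue figures as quoted above, in $M.
revenue = {
    "gaming": 1700,
    "datacenter": 700,
    "pro_visualisation": 250,
    "automotive": 145,
    "crypto": 290,
}

total = sum(revenue.values())  # 3085
# Counting gaming + professional visualisation as pure graphics:
graphics = revenue["gaming"] + revenue["pro_visualisation"]
share = graphics / total  # about 0.63, i.e. roughly two thirds
```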

"Once standard desktop software needs a GPU to perform well"

Did I miss something? Isn't that ALREADY the case, which is why CPUs have had a GPU on them for a decade or more?

The majority of GPUs I own are not used for graphics. But I'm a pretty odd case, and most of my usage of them doesn't need a lot of grunt from the rest of the system, as they are being run on hardware several generations behind (2Ghz Xeons, DDR3, x4 PCIe2 slots) since they get the same performance on shiny new kit as the bottleneck is still on the card itself.

Sounds good in theory, till you have to upgrade your motherboard to get a new GPU simply because it doesn't meet the new card's power needs, or Intel/AMD/Nvidia have decided to change the GPU socket pin count.

@Lee

I would go one stage further. I can see the GPU becoming not just a co-processor on the same die, but a set of execution units in a super-scalar processor. Once this happens, writing code for the GPU will be much easier, as compilers will be able to target it directly, rather than via the rather haphazard methods being used now.

Hopefully not just the high end..

I'm hoping these cards will also cover the lower end, and be open source. Yes, Intel graphics are somewhat underpowered compared to Nvidia and AMD, but they have decent documentation and open source drivers enabling a large amount of open source support, especially on the BSDs.

For OSes that reject binary blobs (hello, OpenBSD), Nvidia support died not far off a decade ago - Xorg is still using the nv driver. AMD support is quite up to date, and Intel support is doing quite well. Given a lot of developers use laptops, it would be useful to have a discrete GPU version of those chipsets when using a CPU that doesn't have a GPU built in.

Re: Hopefully not just the high end..

Not very likely to happen. And even if it does, Intel has the attention span of a 5-year-old with ADHD on a diet of triple-shot espressos laced with meth. Its products won't be supported long enough to get any sort of following.

I think we are going to

see an advancement of their work on clusterability (is that a word?) of processor units that they started with Larrabee and Xeon Phi, but with GPUs rather than x86-based cores.

The first batch will probably be very expensive, and very boring, blue shrouded stuff with big TDPs and will probably be a bunch of multi core GPUs set up in a symmetrical bus with a 16GB bank of Optane stuck on it and aimed at enterprise.

Larrabee - what goes around comes around

Referring to Xeon Phi, remember that this descended from Larrabee, which was Intel's multi-core graphics card: https://en.wikipedia.org/wiki/Larrabee_(microarchitecture)

So it is kind of ironic that Intel is coming full circle here.

One thing that is interesting: for large visualisations Intel is pushing CPU-based rendering, using OSPRay https://www.ospray.org/

I believe that will remain very relevant at the high end.

Looking at Nvidia, their success in the market is not an overnight thing, or an accident. Years ago they made conscious efforts to move into the compute market, and sustained that with Nvidia centres of excellence, continued development of CUDA, NVLink, etc.

So are we going to see, in future, the rather bizarre situation of Nvidia's chips being bought mainly for compute, and Intel producing the most popular graphics chips?

Also, oh joy! Server chipsets with intel graphics.

I don't know if it will happen, but I can dream. Integrated BMC chipsets with an Intel graphics chipset.

I realise they're servers, and servers don't require graphics. Even so, it would be nice to have more than a G200e with 8MB RAM running over a PCIe x1 link (slower than AGP...). Something with better acceleration and PCIe compression. I haven't checked how the more modern AST chipsets are, but if you briefly need to run even a vaguely modern desktop, the G200e is just glacially slow.

Suspect it won't happen with specialists like AST sewing up the market.

Intel's efforts in anything other than x86 processors have not been particularly successful, and often result in them selling the business off or just abandoning it. So I don't hold out much hope that a GPU they'll take at least 18 months to release will amount to much.

Intel740 was their first discrete GPU

Back in 1998, Intel740 was their first discrete GPU.

Like its current embedded graphics solutions, the i740 was plagued with driver bugs.

There is no point hiring better hardware guys to build the hardware if Intel is going to keep its tradition of supporting its graphics solutions poorly, and for short periods, on the driver/software front.