Wanted to start a discussion here on the heels of Nvidia's release of the Titan V card based on the Volta architecture. I put my order in yesterday and it's going to be 3-5 days until I can report if things work as expected but wanted to ask here if anyone knows whether the client is fully optimized to work with Volta already?

... but wanted to ask here if anyone knows whether the client is fully optimized to work with Volta already?

The Linux special application is the one able to get the most work out of it at this stage; next in line is the SoG application if running Windows (your GTX 1080 Tis could produce a lot more work than they do with the right command line values).
However, considering it's just been released, I'd be truly amazed if anyone has had a go at writing code that will take advantage of its improved CUDA architecture, let alone make use of its Tensor cores.
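For anyone wondering what "the right command line values" means in practice: the OpenCL SoG application reads its tuning options from a mb_cmdline*.txt file sitting next to the app. The line below is typical of what people have reported running on high-end Pascal cards; the exact values are illustrative only and need tuning per card and per system:

```
-sbs 1024 -period_iterations_num 1 -high_perf
```

-sbs sets the working buffer size, and lowering -period_iterations_num trades screen responsiveness for throughput, so values this aggressive are best on a dedicated cruncher.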

EDIT - it looks like NVidia have already released a driver with Titan V support, but there are already reports of issues (TDR errors, blanking during Blu-ray playback, and display blanking on G-Sync monitors under certain conditions).
Most of the issues so far appear to be video related - and while this is nominally a video card, it is really a compute card with video output. The only possible show stopper for crunching could be the TDR issues. Are they occurring in games/video display, or when crunching?
Could be a while before the driver support is mature.
Grant
Darwin NT

Eric is complaining that it doesn't have enough video memory to use as a front-line telescope signal pre-processor.

I wondered about that myself. Why not 16GB of memory? And why not GDDR6? Why are you lining the pockets of your competitor by using HBM2 memory?

My cynicism says that 6 months hence we will see the AIB versions of the board, with performance that embarrasses the Nvidia-sourced product. Like what happened with the Titan Xp and the 1080 Ti.
Seti@Home classic workunits: 20,676 CPU time: 74,226 hours

Because it's not being produced in volume yet.
Noises are that next year, maybe in the 1st half, there should be products using it. Maybe.
Jan 17th at CES 2018 is when people are anticipating an announcement by NVidia on Volta-based consumer video cards using GDDR6, and on when they will be made available.

Why are you lining the pockets of your competitor by using HBM2 memory?

Because at $10,000+ for each of the cards they sell that uses it (and now an extra $3,000 per card for chips that otherwise wouldn't have been used), they make themselves a lot more money in the long run than if they had waited for GDDR6 to be ready before releasing Volta.
Grant
Darwin NT

Because it's not being produced in volume yet.
Noises are that next year, maybe in the 1st half, there should be products using it. Maybe.
Jan 17th at CES 2018 is when people are anticipating an announcement by NVidia on Volta-based consumer video cards using GDDR6, and on when they will be made available.

Why are you lining the pockets of your competitor by using HBM2 memory?

Because at $10,000+ for each of the cards they sell that uses it (and now an extra $3,000 per card for chips that otherwise wouldn't have been used), they make themselves a lot more money in the long run than if they had waited for GDDR6 to be ready before releasing Volta.

I wonder how many cards ($10K or $3K) actually get sold between the announcement now and next year. Part of the high retail cost for these cards is the very large proportion of cost allotted to the HBM2 memory, which is still very expensive because of the very poor yields they are achieving. And the yields for the NV100 chip can't be great either, because it is HUGE. If you had waited till next year, when GDDR6 was viable and possibly a die shrink once yields on the 10nm process had finally improved, you might have been able to sell more cards at a better price point. I think most of this announcement is just PR for Nvidia and is not going to add much to their bottom line. My $0.02 of gazing into my crystal ball.
Seti@Home classic workunits: 20,676 CPU time: 74,226 hours

I wonder how many cards ($10K or $3K) actually get sold between announcement now and next year.

The $10,000+ cards have been shipping since late 2016 (prices for those early GPUs were roughly $19,000 each). Flogging off the $3,000 cards just means they're not writing off as much silicon due to wastage (it's been suggested these cards are ones that didn't pass final inspection for the V100 Tesla cards, but are suitable for what the NVidia Titan V will be).

Part of the high retail cost for these cards is the very large proportion of cost allotted to the HBM2 memory, which is still very expensive because of the very poor yields they are achieving. And the yields for the NV100 chip can't be great either, because it is HUGE.

The HBM2 memory does add to the cost, but as you pointed out, the size of the die is the biggest reason for their incredible expense. Basically they are a design that was ahead of, or at the very limit of, chip manufacturing capabilities when they were released.
I expect the consumer-release Volta cards won't have any Tensor cores, so that will reduce the die size significantly, along with reductions in the number of CUDA cores for each model.
Grant
Darwin NT

The HBM2 memory does add to the cost, but as you pointed out, the size of the die is the biggest reason for their incredible expense. Basically they are a design that was ahead of, or at the very limit of, chip manufacturing capabilities when they were released.
I expect the consumer-release Volta cards won't have any Tensor cores, so that will reduce the die size significantly, along with reductions in the number of CUDA cores for each model.

I expect that too with the release of NV102-chipped consumer cards. Or will they really remask the design to eliminate the Tensor core subsystem... or just fuse off the failed parts of the NV100 chip with flaws in that subsystem? Building new masks is expensive, but it could be the smart way to achieve that goal, as then they could use standard-size reticles. They are at the absolute limit of reticle size now with the current design.
Seti@Home classic workunits: 20,676 CPU time: 74,226 hours

We're not talking about the workstation cards. We're talking about the future cut-down NV102 consumer-level cards that will be released by the AIB partners next year.
Seti@Home classic workunits: 20,676 CPU time: 74,226 hours

Wanted to start a discussion here on the heels of Nvidia's release of the Titan V card based on the Volta architecture. I put my order in yesterday and it's going to be 3-5 days until I can report if things work as expected but wanted to ask here if anyone knows whether the client is fully optimized to work with Volta already?

hasherati, it's been a week now; have you had a chance to see what those bad boys are capable of yet? :-)

I doubt he has received the card yet. And I also question whether any project or BOINC will understand the card yet, even if the latest drivers support it.
Seti@Home classic workunits: 20,676 CPU time: 74,226 hours

There's an article on it I read today. For single precision, it's maybe marginally faster than a 1080 Ti; for gaming, why bother. For double precision, it ROCKS.

If you are a developer or coder that can utilize double precision compute, however, the Titan V looks like a must-have product. That's a tough thing to type out for anything with a price tag in that range, but we are talking about a GPU that offers 10-14x better performance in some key performance metrics including N-body simulation, financial analysis, and shader-based compute.

They tested it out on Folding@home and it was outstanding.

For cryptocurrency, it's faster than anything out there, but at a price tag of $3,000 it doesn't make sense to use it.

The only thing I don't like is the memory speed, which is much lower than a 1080 Ti's.

Guess we need to see how it actually does, but you would think it would be outstanding for DP projects.
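The N-body result the article highlights is easy to appreciate if you look at what the inner loop actually does. Here's a minimal pure-Python sketch (my own illustration, not code from any project) of the all-pairs gravitational acceleration kernel: every pair costs a handful of double-precision multiply-adds plus a square root, which is exactly the kind of FP64-bound work where the Titan V's uncapped double-precision units earn the claimed 10x+ over gaming cards.

```python
import math

def nbody_accels(pos, mass, eps=1e-9):
    """All-pairs gravitational accelerations (G = 1) in double precision.

    pos: list of (x, y, z) tuples; mass: list of floats; eps softens
    close encounters. Every pair needs FP64 multiplies and a sqrt,
    so throughput scales with the card's double-precision rate.
    """
    n = len(pos)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue  # no self-interaction
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            dz = pos[j][2] - pos[i][2]
            r2 = dx * dx + dy * dy + dz * dz + eps * eps  # softened r^2
            inv_r3 = mass[j] / (r2 * math.sqrt(r2))       # m_j / r^3
            acc[i][0] += dx * inv_r3
            acc[i][1] += dy * inv_r3
            acc[i][2] += dz * inv_r3
    return acc
```

A GPU version does the same arithmetic with one thread per body, which is why MilkyWay and Einstein's N-body tasks map onto DP-strong hardware so well.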

Where was today's article? Yes, the memory problem is from HBM2 it seems. Maybe next year when the consumer cut-down version ships, it will use GDDR6 memory which has much better bandwidth than even GDDR5X. It should work great on MilkyWay which demands DP processing and the N-body tasks at Einstein should benefit too.
Seti@Home classic workunits: 20,676 CPU time: 74,226 hours

Where was today's article? Yes, the memory problem is from HBM2 it seems. Maybe next year when the consumer cut-down version ships, it will use GDDR6 memory which has much better bandwidth than even GDDR5X. It should work great on MilkyWay which demands DP processing and the N-body tasks at Einstein should benefit too.

Sorry guys, been tied up, and I must admit the first thing I did was test Ethereum mining :) I'm hitting 77 MH/s, which is great, but I agree it doesn't make a lot of sense to use a $3,000 card only to make around $3 a day in coin. The other card is sitting in a box, unopened. I'm at a large Silicon Valley tech firm and we had Nvidia over for an AI/ML talk this week. In talking with their sales rep, they think the first batch of Titan Vs is going to sell out pretty quickly, so I'm considering putting the other one on eBay NIB once they sell out, to make some of the $ back. It's a gamble but we'll see. Let me flip over to BOINC and see what this will do.
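For anyone curious about the arithmetic behind "doesn't make a lot of sense", here's a quick break-even sketch. The card price and daily revenue are the figures from the post above; the power draw and electricity price are my own illustrative assumptions, and the model naively ignores mining-difficulty growth:

```python
def payback_days(card_cost_usd, revenue_per_day_usd,
                 power_w=250.0, kwh_price_usd=0.12):
    """Naive days-to-break-even for a mining card.

    power_w and kwh_price_usd are illustrative assumptions,
    not figures from the thread; difficulty growth is ignored.
    """
    power_cost_per_day = power_w / 1000.0 * 24.0 * kwh_price_usd  # $/day
    net_per_day = revenue_per_day_usd - power_cost_per_day
    if net_per_day <= 0:
        return float("inf")  # the card never pays for itself
    return card_cost_usd / net_per_day

# Titan V at the numbers above: $3,000 card, ~$3/day gross.
# With these assumed power costs that's over 1,300 days (3.5+ years)
# to break even, before difficulty increases make it even worse.
print(round(payback_days(3000.0, 3.0)))
```

Which is why flipping the second card on eBay probably beats mining with it.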