The usual practice is to install the operating system and your major applications on the SSD, and everything else on the HDD. But I suggest being careful when you set up your cache folders for things like RealFlow, so the SSD doesn't get written to constantly.
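One low-tech way to do that is to relocate the cache folder to the HDD and leave a symlink at the original path, so the application keeps writing to the same location while the data lands on the spinning disk. A minimal sketch (the function name and paths are made up for illustration, and this isn't RealFlow-specific):

```python
import os
import shutil

def redirect_cache(cache_dir, hdd_dir):
    """Move a cache folder off the SSD onto the HDD and leave a
    symlink at the old path, so the app keeps writing to the same
    location while the data actually lands on the HDD."""
    parent = os.path.dirname(hdd_dir)
    if parent:
        os.makedirs(parent, exist_ok=True)
    if os.path.isdir(cache_dir) and not os.path.islink(cache_dir):
        shutil.move(cache_dir, hdd_dir)   # relocate any existing cache data
    os.symlink(hdd_dir, cache_dir)        # old SSD path now points at the HDD
```

Apps that let you set the cache path directly (as RealFlow does) don't even need the symlink; just point them at the HDD.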

Originally Posted by cgbeige:the new Mac Pro and other Ivy Bridge Xeons will be out soon if you can wait. They will be beasts for rendering with one 12-core CPU. But maybe that's overkill for your needs

And I have two of those Dell U2713HM screens. Really nice.

I don't know why you keep bringing up the new Ivy Bridge-E Xeons. Pre-emptive Apple love, perhaps?

The decently clocked ones are six-core parts (12 counts the HT virtual cores), and the 2643, the one hexacore with a semi-decent frequency, is $1,550 at tray price.
They might be beasts for preview-style rendering, but only if you have high cycles-per-die needs, because in cycles per buck they are horrendous, close to the absolute worst possible CPU you could slap into a case.

Given the OP's budget of $3 to 3.5k, I'd be more than hesitant to recommend spending more than half of it on a CPU over three times as expensive as a top-of-the-line i7 for maybe a 20-30% performance squeeze, if that.
On top of that, if you want a single CPU and overclocking is an option, they are simply not that good for rendering, and get absolutely smoked at single-threaded tasks even by a lowly i5 K-series part clocked respectably.
(Unless the 2643 turns out to be unlocked, in which case I'll retract that last paragraph, but I haven't found any indication either way.)

Unless you mean the 2695, which is an actual 12 physical cores, but it has a laughable base frequency and a $2.4k tray cost (street price is practically the OP's entire budget floor). Those are totally NOT meant to be good rendering CPUs; they are web-farm/computational-centre CPUs for buyers with no licensing constraints and with footprint and heat as priorities.
They would be the only thing worse than the above-mentioned 2643 in bang for buck for a workstation, so I sincerely hope you weren't suggesting that.

I would at least wait to see the reviews. I'll be writing one of the new Mac Pro and comparing it to the dual socket versions of the current HP Z820 and Dell T5600 that I reviewed for Ars Technica. But I'm not saying "buy a Mac", I'm saying "wait to see what the CPU landscape looks like once these chips hit the market." But Apple will be the first out the door with these machines, likely within the next month since OS X Mavericks just went golden master and these chips were slated for mass production in September.

I'm not saying the CPUs won't be good; they will be, and there are quite a few very good ideas in them, actually.
I'm saying the 12-core ones are flat-out not meant for a workstation in the first place, never mind that at their street price they wouldn't be even remotely interesting anyway. The only decently clocked one, at $3.5k, is still unlikely to beat an Ivy or Haswell clocked at 4.7 GHz in single-threaded work, no matter what else is thrown in there; and for rendering, the price of that one CPU literally buys two full render donkeys.

They are simply not worth waiting for when your budget is three grand or less; they'd be a waste of money.
The CPUs themselves are pretty decent, but their bang for buck is simply not aimed at this thread; it lies elsewhere.

Originally Posted by cgbeige: from Tom's Hardware:

"It’s looking like 3D modelers are going to seriously benefit from the potential that Ivy Bridge-EP offers to Apple’s Mac Pro, even in a single-socket configuration."

Oh yeah, TH, because they surely have a clue about bang for buck while rolling in sponsor and bribe money, and that quote sits right below a rendering test.

Originally Posted by cgbeige:while you could be right about the price, I don't get your reasoning here:

That's exactly what they are meant for. I need ECC RAM, 12 cores for rendering and a fast PCI-based SSD.

You want that in a render client; you really don't want it in a workstation that needs decent interface/viewport speed.
ECC is most important for 24/7 machines, and a fast SSD can be used in any kind of machine these days.

__________________
- www.bonkers.de -The views expressed on this post are my personal opinions and do not represent the views of my employer.

I was talking about the PCI-based SSD since SATA 3 is no longer adequate for the speeds that SSDs are reaching (the interface tops out at 6 Gb/s, roughly 600 MB/s in practice), but this is not exclusive to Xeons either, now that I think about it. It's just built into the new Mac Pro and other Macs. Anyway, I'll stop there. I just think that the whole pro workstation scene is about to change, so it would be good to see what that landscape looks like in a month.

But ECC isn't just important for servers – I've already had to send back a RAM chip for my gaming machine/render helper and RAM problems can be hard to diagnose since your system just behaves erratically. ECC at least maps out that defective area of the RAM and lets you use it without interrupting your work.

You can get PCIe-based SSDs. Yes, ECC can be helpful, but from a statistical point of view it is most useful for machines that are working 100% of the time. Yes, you can lose work to RAM errors, but ECC only helps with small errors like you get from cosmic rays; defective memory sticks are often beyond what ECC can correct.
Cheers
Björn


Originally Posted by cgbeige:while you could be right about the price, I don't get your reasoning here:

That's exactly what they are meant for. I need ECC RAM, 12 cores for rendering and a fast PCI-based SSD.

They aren't because they are targeted at computational centres.
They feature a squillion low temperature, low interference, low clock cores.
24 virtual cores at 2.2 or 2.4 GHz for a $2.5k tray price is horrible bang for buck in a workstation.

Core parallelism is far from scaling linearly, and in an interactive scenario, which is the focus of a workstation, there is far too much poorly threadable, if not outright single-threaded, bottlenecking for them to be good.

Animation, rigging, bakes, and a lot of I/O-bound operations on caches are all strictly single-threaded, and many CPU-bound operations scale horribly past the three or four thread range.

Even for rendering they might prove bad enough, depending on the engine and on how much per-thread memory duplication is necessary; in some cases, even with a generous cache, enough duplication means you will starve the cache and thrash so frequently that they perform very, very poorly. And lastly, if your engine is licensed per core and not per node, you are paying through the nose just to serve them.

The 3.5 GHz hexacore CPUs are the ones intended for workstation use.
While the 26.4 to 28.8 theoretical GHz spanning twelve cores might seem attractive, they will almost always (close to 100% of the time in a workstation scenario) end up vastly inferior to the 21 theoretical GHz spanning only six. Only in massive thread pool and many-VM scenarios, with low heat per VM and high yield per watt, do the 12-core CPUs have an advantage.
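That claim can be put in rough numbers with Amdahl's law. A minimal sketch (the 80/20 parallel/serial split is an illustrative assumption; interactive DCC work is often far worse than 80% parallel):

```python
def effective_speed(ghz, cores, parallel_frac):
    """Amdahl's law: effective throughput in 'GHz-equivalents',
    given the fraction of the workload that actually parallelizes."""
    serial = 1.0 - parallel_frac
    return ghz / (serial + parallel_frac / cores)

# Hypothetical workload mix: 80% parallel, 20% serial.
hexa = effective_speed(3.5, 6, 0.80)      # 6-core at 3.5 GHz -> ~10.5
twelve = effective_speed(2.2, 12, 0.80)   # 12-core at 2.2 GHz -> ~8.25
```

Only with a fully parallel load (`parallel_frac=1.0`) does the 12-core win, 26.4 vs 21 effective GHz; with even 20% serial work the hexacore pulls ahead.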

As for PCIe SSDs, that's far from a Xeon exclusive; the bonus with the more recent Ivy Xeons comes when you have absurd amounts of parallel SSD storage virtualized into multiple pools, which is again a farming scenario, not a workstation scenario (where PCIe storage à la Fusion-io ioFX has been available for years now).
ECC is 100% pointless here; its error prevention is purely for mission-critical scenarios, and the incidence of ECC recovering or preventing something is several decimal places away from significant. Unless you work outside the ionosphere during a solar storm.

Edit: I did sort of forget to include the E5-2697 v2, which is an actual, respectably clocked 12-core Ivy. But with a tray price in excess of $2.6k, a street price likely to be well over four grand before GST, and a locked multiplier, it's unlikely anyone will care much, unless your productivity is so valuable to you that you're OK forking out $10k, 80% of it CPU, for a dual-CPU workstation to see faster previews.
If you meant those, then yeah, they are nice, but they are priced ridiculously and most likely targeted at crossroads nodes where you need as much short-proximity oomph as possible with no regard for component cost.

Edit 2: Apparently the first 2697s on the street are retailing for $5 to $5.8k. I see now that $4k was very optimistic.

Originally Posted by cgbeige:But ECC isn't just important for servers – I've already had to send back a RAM chip for my gaming machine/render helper and RAM problems can be hard to diagnose since your system just behaves erratically. ECC at least maps out that defective area of the RAM and lets you use it without interrupting your work.

You misunderstand what ECC is for and how it works, I think.
ECC doesn't map out defective areas of memory; ECC adds parity and checksum bits, plus procedures for when a stored bit gets inadvertently flipped by external causes.
That's radiation (cosmic rays, local radioactivity, enormous local interference etc.).

None of it applies to your workstation except cosmic rays, which, to hit a cell squarely and flip it from a 0 to a 1 exactly while it holds a 0, just after it's been written and right before it's read, need to roll exactly a bazillion on a bazillion-sided die.

When Maya or your rendering engine crashes with a memory-related issue, that's usually a bad piece of hardware, a glitch in the OS, or plain bad programming.
ECC can't magically figure out why an entry in a long binary number makes an app crash; it only ensures that entry doesn't get changed by external factors. External factors that, under your desk, even with a 24/7 machine, occur about once every forty years.
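For the curious, here's roughly how that correction works: a minimal single-error-correcting Hamming code sketch (real ECC DIMMs use a 72/64 SECDED code, but the principle is the same). A single flipped bit is located by the syndrome and flipped back; a failing chip throwing multi-bit errors is beyond it:

```python
def hamming_encode(data_bits):
    """Encode a list of 0/1 data bits with Hamming parity bits.

    Parity bits sit at power-of-two positions (1-indexed); each one
    covers the positions whose index has that power-of-two bit set.
    """
    n = len(data_bits)
    r = 0
    while (1 << r) < n + r + 1:    # enough parity bits to address every position
        r += 1
    code = [0] * (n + r)
    j = 0
    for i in range(1, n + r + 1):
        if i & (i - 1):            # not a power of two: a data position
            code[i - 1] = data_bits[j]
            j += 1
    for p in range(r):
        pos = 1 << p
        parity = 0
        for i in range(1, n + r + 1):
            if (i & pos) and i != pos:
                parity ^= code[i - 1]
        code[pos - 1] = parity
    return code

def hamming_correct(code):
    """Return (corrected codeword, error position); position 0 = no error."""
    syndrome = 0
    for i, bit in enumerate(code, start=1):
        if bit:
            syndrome ^= i          # XOR of positions of all set bits
    fixed = list(code)
    if syndrome:
        fixed[syndrome - 1] ^= 1   # flip the single bad bit back
    return fixed, syndrome
```

Note there is nothing here that "maps out" a bad region: the code corrects one flipped bit per word as it passes through, which is exactly the cosmic-ray case and not the dead-stick case.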

I don't really care much about ECC in a workstation. IMO it matters more in a server that first caches files into memory before sending them across a network as a client requests them. They'll then sit loaded into memory forever until they go stale when other requests eventually bump that data out of memory.

The Xeon platform at least offers the capability to run a lot more memory. Compositors would love to run 256 or 512 gigs of RAM, especially with 4K footage. Having more cores in an app like NUKE or AE lets you render more frames at the same time and eat up all that memory.
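A back-of-the-envelope sketch of why that much RAM is plausible (the 32-bit float RGBA assumption is illustrative; real comps vary in bit depth and channel count):

```python
def frame_mib(width, height, channels=4, bytes_per_channel=4):
    """Uncompressed frame size in MiB; defaults assume 32-bit float RGBA."""
    return width * height * channels * bytes_per_channel / 2**20

uhd = frame_mib(3840, 2160)               # one UHD float frame, ~126.6 MiB
frames_in_512gb = int(512 * 1024 / uhd)   # frames a 512 GiB cache could hold
```

Roughly 4,100 frames, under three minutes of footage at 24 fps, so a heavy comp with cached precomps really can eat half a terabyte.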

3D render nodes can run multiple simultaneous render jobs across all those cores if you have enough memory. That helps sidestep the diminishing returns of putting every core on a single render job.

How much memory do those new Macs max out at? Four RAM slots, right? Maybe they'll support expensive 16 GB DIMMs for a grand total of 64 GB? That's no improvement over the socket 2011 i7 platform from a year and a half ago, and a step backwards compared to modern Xeon platforms that offer up to 32 RAM slots.

It's a shame the individual Xeon core speeds, even with Turbo, are so low, but to me big RAM is one of the primary reasons for wanting a Xeon platform as a workstation, aside from dual sockets.
