I'm looking to build a new PC for work, really spec'ed to run three pieces of software: ArcGIS, USGS's ISIS, and some custom crater-detection software. For the latter two, I already know I need a fast processor and maxed-out RAM. What I don't need for either of those is a GPU.

My question is, what use does Arc make of the GPU? I couldn't really tell anything from ESRI's system requirements page. For example, does it do any hardware acceleration? Use CUDA? Or would the on-chip GPU run Arc just as well as a $500 GPU?

ArcGIS Desktop makes limited use of hardware-accelerated graphics -- it improves scrolling of base maps in ArcMap and does some OpenGL shader work for 3D renderings in ArcScene and ArcGlobe. There are no CUDA or OpenCL functions. An integrated HD3000 GPU on an Intel i7-2600 would be perfectly adequate. Put the extra money into additional RAM and SSD storage for disk I/O.

Then, if you find you need to tackle Python coding of OpenCL or CUDA for NumPy hypercube processing beyond the CPU capabilities of ISIS or IDL, you could add a low-priced AMD or Nvidia discrete PCIe graphics card to develop image-processing workflows with.

Thanks for the response, Stuart. It's actually different from one I got from someone else. I was wondering if you could by chance comment on what they said?

I'm actually a systems analyst at a company that has major ArcGIS and AutoCAD teams/practices, and I deal with this on a daily basis. Those specs look pretty good, but don't try OC'ing your CPU, because software like ArcGIS is known for lockups, crashes, and overall system deterioration. ... You want an Nvidia Quadro card or an ATI FirePro card; you may even want to get dual graphics cards, since ArcGIS can make use of them (comes in handy with the huge 3D or high-definition images). Both of the cards I listed are for high-end workstations (made for CAD and ArcGIS) and even have specific drivers to increase performance with both pieces of software. I would also swap out the RAM you have for unbuffered ECC RAM; you need to make sure your motherboard supports it, but it can definitely make a world of difference when you're dealing with massive data sets and projects.

First tip: Don't store your project files or any references on a network or external drive if you are going to be working on them; it can cause GIS to slow to a crawl in no time.

Second tip: Using an SSD for a boot drive is great, but unless you know what you're doing I would avoid it; if you end up having any kind of GIS temp files being saved to the SSD, it could kill it in a week or two. You would probably be better off buying 3 or 4 medium-sized SATA hard drives and creating a RAID 5 with them. That would give you amazing boot/load times (equal to or better than an SSD) and redundancy.

The advice isn't purely GPU-centric, since I was asking about all the specs, but perhaps you can still comment? He hasn't responded to my clarification that I really don't use any of the 3D stuff in Arc, so I'm wondering if that has to do with the different suggestions between the two of you.

Hmmm, I think you'll notice that they're also supporting Autodesk CAD, which runs highly optimized with 64-bit OS support and GPU hardware support via suitable custom GPU drivers. A similar configuration would be needed to support other highly graphics-intensive image processing, engineering, and visual simulation programs from the likes of Adobe, Abaqus, ERDAS, Navisworks, IHS, Schlumberger, etc. Rendering and hardware-based acceleration in Esri ArcGIS Desktop 3D Analyst, ArcGlobe, and ArcScene are not in the same league!

For Esri ArcGIS Desktop -- there is limited use of hardware acceleration in ArcMap, and it is all OpenGL based: primarily accelerated OpenGL shaders and texture support for raster buffering (for smooth scrolling) and some 3D wireframe fills. The embedded HD3000 GPU of an Intel i7 2600K/2700K, or the P3000 of a Xeon E3 12x5, would provide more than acceptable ArcGIS graphics performance.

For your use case -- a shared file system between Linux and Windows running ISIS and ArcGIS respectively -- I would still suggest that you put the resources into your storage system I/O. I prefer using several small 6 Gb/s SATA III SSDs for OS, application, and temp/swap storage (your file geodatabase Scratch.gdb and Default.gdb / project-specific .gdb's, for example), with larger SATA HDDs for bulk data stores as either RAID or JBOD, although I prefer JBOD. Use TRIM and over-provisioning for the SSDs, and when exhaustion of an SSD actually occurs, replace it -- you'll have plenty of warning.

Thanks again for your reply and thoughts on this. You're getting into another realm here with HDD/SSD and other things. I actually have another thread going on other forums asking for specific advice about that. But ... since you're knowledgeable in this area and I have received little response on the generic "build me a PC!" forums, if I could borrow your expertise for the following, I would be greatly appreciative. This is copied from one of those forums, so it refers to some stuff "above" where I basically summarized the software I want to run for the generic audience. Note that I will be maxing out the RAM -- if I get an LGA 1155 board, I'll be putting in 32 GB; if I get an LGA 2011 board, I'll be putting in 64 GB, and it'll be DDR3 @ 1600 MHz.

Hard Drives: Based on my needs above, it has been suggested that I should RAID the disks. It was initially suggested I get a small SSD for the OS and software and a HDD for data (1 TB should be okay for that at the moment). But if I RAID -- and it was suggested I do RAID 10 -- then I'd need 4 drives for the RAID 10, plus whatever I do for OS/system (such as RAID 1, needing 2 drives). The reason I went into detail about the applications in terms of disk I/O is that maybe you folks have an idea of whether the SSD would really shine here, whether RAID would be sufficient, or even whether RAID would be "needed." If I were to go with a RAID 10 HDD and RAID 1 SSD, though, we're talking around $850, which is almost half of what I want to spend on the entire system. If I don't do RAID, I'll get an external drive on which to do backups.

The drives I'm looking at are the WD Caviar Black 1TB and the Crucial M4 128GB. Though NewEgg seems to have a deal every few days on other SSDs -- there's a 120GB on sale right now for $1/GB that expires in 24 hrs.

CPU: At the moment, I'm looking at either the Intel i7-3770K (Ivy Bridge, to be released at the end of the month) or the i7-3820. Both would allow full usage of DDR3 RAM @ 1600 MHz. On the former: I was initially going to go with the i7-2600K, but for $10 more I could eke out the performance increase of Ivy Bridge. On the latter: the benefit I see is the ability to use a motherboard that can hold 64 GB of RAM, and it addresses the RAM with a quad-channel memory controller as opposed to dual-channel. However, the benchmarks I've seen for the 2600K versus the 3820 seem to be mixed. Maybe that means the 3770K will thus be better than the 3820. But then, within a year or so, Intel should release an LGA 2011 version of the Ivy Bridge chip that'll be the successor to the 3820, be better than the 3770K, and have the benefits of the 64 GB RAM addressing, quad-channel controller, etc. On the other hand, I don't know how much software that isn't designed specifically for such an architecture could take advantage of it.

The cost for the two is comparable, and of course less than the i7-3930K ... which would be nice, but I don't think my code can really take advantage of the extra cores at double the price for the 3930K.

Ideas?

Motherboard: It's a pain that this is dependent upon the CPU type, but 'tis. If I go with an LGA 1155 chip (such as the i7-3770K), then I've been looking at the Asus Maximus IV Extreme or the ASRock Z68 Extreme7 Gen3; these seem to offer the most expandability. If I go with an LGA 2011 chip, for roughly the same prices, I'm looking at the Asus P9X79 Pro (the only one that offers SSD caching), the ASRock X79 Extreme6/GB, or possibly the ASRock X79 Extreme9. From my understanding, the main motherboard-wise differences between 1155 and 2011 are dual- vs. quad-channel memory controllers, support for 32 GB vs. 64 GB of RAM, native PCIe 2.0 vs. PCIe 3.0, and theoretically more SATA expandability with 2011, though that hasn't really been realized in what's on the market in this price range. Ideas on this one?

Again Stuart, thanks for the help so far, and if you have any insight into any of the above points, it'd be appreciated ... and anyone else happening to read this. :)

Someone has to pay for this box, so in addition to performance--cost effectiveness should enter into the calculus.

That said, your use case is highly CPU- and I/O-bound -- graphics performance is secondary. The expanded addressable RAM of the LGA 2011-socketed Intel CPUs and a larger on-die L3 cache will give you the best functional system. I'd stay away from the latest Ivy Bridge-based CPUs, as they are a smaller die with less space for L3 cache. Also, they'll command a premium as they are released.

As the CPU for a 1P LGA 2011 MB, look at the Intel Sandy Bridge i7-3820: 3.6 GHz (3.8 GHz Turbo), 10MB L3 cache, 130W, DDR3-1600 quad channel. It looks to be the best price point (~$300). There's no on-die GPU, so add a low-end PCIe 2.0 x16 Nvidia or ATI graphics card -- select one with the video connectors you prefer: DVI, DisplayPort, and perhaps HDMI.

As to disk I/O, I'd probably go with a couple of small SSDs attached to the MB's SATA III 6 Gb/s ports. I prefer Samsung, but have used Kingston and Crucial SandForce-based SSDs; Intel's 520 series might be a good alternative to the Samsung 830s. Any MLC-based drive is going to be fine -- no need for SLC -- and you can expect at least 4 years of problem-free operation. Using the BIOS-controlled on-MB SATA ports will simplify a multiboot configuration (or using VMs), if that is the way you want to set up the OS.

For bulk storage, decide if you want an external disk array or want to run internal drives off the tower's power supply. If internal, size your power supply to support the load -- don't skimp on the power supply! You will need a PCIe 2.0 x4, x8, or x16 SATA 3.0 controller offering a good-sized buffer. The controller software will let you set up RAID in various configurations. No opinion there, as I think internal RAID is kind of pointless -- so I'd probably run it as JBOD, doing incremental backups from drive to drive. If an external RAID, look to use an on-MB eSATA 3.0 port, or use a PCIe slot for an eSATA 3.0 card. Alternatively, depending on the MB and external array you choose, you might consider USB 3.0 for connecting the external array as direct-attached storage. Avoid any iSCSI configuration!

If you have the money to burn, then a 2P LGA 2011-socketed server/workstation MB and Xeon E5-2600 series CPUs might be the way to go. But you'd be looking at ~$500 for the MB, ~$1100 per CPU (at the core speed you'll want), and then the extra memory on top of that. Again, you'd need a low-end PCIe 2.0 x16 discrete graphics card. But you'd be able to double or quadruple your available memory. It won't be too long before Dell and HP pick up the Xeon E5 for their workstation lines, if you'd prefer to go mainstream and not roll your own. Or even look at Apple, who will be refreshing the Mac Pro line soon, if your budget allows ;-)

Obviously you'll have to structure your workflow to compensate for 32-bit-only ArcGIS and 32-bit Python 2.6 (or 2.7 at Desktop 10.1) -- you'll need to modify the 32-bit Python executables to run LargeAddressAware. But you can run multiple ArcGIS instances and out-of-process ArcPy work in memory. Also, your Linux-based ISIS work and any image processing using the 64-bit OSGeo libraries will all happily use the available memory.
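To sketch that out-of-process pattern: each job gets launched as its own Python process, so each one gets a fresh address space rather than sharing one 32-bit heap. The squaring worker below is a made-up stand-in -- a real worker script would import arcpy and run a geoprocessing tool per job.

```python
# Hypothetical sketch of out-of-process job fan-out. Each worker runs in
# its own interpreter process, so memory limits apply per job, not in total.
import subprocess
import sys

def run_job(tile_id):
    """Launch one worker process and return its printed result."""
    # Inline stand-in for a real worker script that would import arcpy.
    worker = "import sys; print(int(sys.argv[1]) ** 2)"
    out = subprocess.run(
        [sys.executable, "-c", worker, str(tile_id)],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout)

results = [run_job(t) for t in range(4)]
print(results)  # -> [0, 1, 4, 9]
```

The same idea is why multiple ArcGIS instances help: four 32-bit processes can together use far more RAM than any one of them could address.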

Thanks again, Stuart. Based on your input and input from folks on a few other boards (and reading the Stack Exchange thread), the flavor of the day in terms of the PC build I'm thinking of, coming in at just over the $2k targeted budget, is: http://pcpartpicker.com/p/6LXC .

That specific motherboard allows SSD caching on two of its SATA III ports, so the planned 60GB SSD would go there, and the three 1TB drives would be in a RAID 5; the 120GB SSD is for boot, software, and either a Linux partition or the Linux virtualization software. The RAID was suggested to me because it is effectively an automatic backup (so I don't need to worry about that at all), and read/write speeds would be increased -- esp. read. I was initially going for a 128GB SSD for caching, but then was told Intel chips can only handle 60GB. If I had an extra few $k to throw around, I'd probably go all SSD, but I don't think that's worth it at this point and I don't have that kind of money for this.

The GPU is sort of middle-low-end, I think. 850W power supply.

I figure this'll also give me room to grow, if needed, or to upgrade easily -- such as, in a year or two, going to the Ivy Bridge equivalent chip (depending on benchmarks), or upgrading the GPU if I end up doing 3D stuff. Room to upgrade the storage capabilities is definitely there.

Thanks, I'm still tweaking a bit, but now I'm thinking I may actually purchase this weekend since I'm not waiting for Ivy Bridge. My ex has offered to come help put it together. :)

And yeah, it was a guy on another forum who had posted an example build from that site and I was like, "Wow!" Very nice that it searches over a dozen sites for the cheapest even while including shipping and rebates.

I've been benchmarking for a while today, using the free 32-bit version of Geekbench, Cinebench, and some real-world tests of my Igor code. The Geekbench score on my desktop was 40% higher on the Windows side versus Mac; on the laptop it was 60%. The Cinebench CPU score was about 50% higher in both. But Igor must've originally been Windows code ported to Mac -- it was consistently about 25% faster on the Windows side, even virtualized in 32-bit mode, than on the Mac side. Meaning I saw a 50% increase in speed in Windows on the laptop versus Mac on the desktop. I'm kinda looking forward to what it'll be on a real Windows machine now.

Well, I got the computer built; three days later, two of the four programs I want to run are installed and working, and both Windows and Linux are at least talking with the RAID array. This morning I fought with Arc 9.x licensing for an hour before sending an e-mail for help.

Benchmark-wise, not overclocked, the new computer is 55% faster when running Geekbench in free 32-bit mode, 11% faster in Cinebench's CPU test under 64-bit, and 74% faster in its OpenGL test. PCMark testing was 84% faster. As for real-world benchmarks -- the actual applications I run -- Igor executed my code ~78% faster than Windows virtualized on the Mac, while it's 3-3.5x faster than Igor running native on the Mac (yeah -- Igor runs faster under VIRTUALIZED Windows than native on the Mac, which I didn't know until 2 weeks ago). ISIS processed a medium-sized image in about 9 minutes on the new computer, about 11 minutes on my laptop, and much slower on the old desktop. I haven't run a larger image to see if there's more or less of a speedup.

So, new computer is definitely faster. But I've been fighting with it, Windows, Linux, and now Arc to get things running for the past 5 days and am fairly frustrated and sleep-deprived as a result. ;)

I'll run these when my 21-hr job finishes on the Linux side. :) I got the crater detection code on Friday and spent 5 hours this morning without any Linux/C knowledge (well, 0+epsilon knowledge) and was able to convert it from 32-bit to 64-bit so it runs on the machine. Ran some benchmarks on it -- it's only around 18% faster than on my laptop, but it's quite nice to see two processes running that are each using up 19GB of physical RAM. I need to run 6 of these (3 model craters, times 2 halves of an image, because the JPEG file format doesn't support more than 65,535 pixels in height or width), though I'm running two instances. I probably could run three, but that would bring me down to ~3GB of RAM left while the system is using 4 ... don't want to start eating into swap space on this.
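The half-image bookkeeping can be sketched as a tiny helper (hypothetical -- not the actual crater-code logic). JPEG stores each image dimension in a 16-bit field, hence the 65,535-pixel per-dimension cap:

```python
# Hypothetical helper: compute row ranges so each slice of a tall image
# stays under the JPEG per-dimension limit of 65,535 pixels.
JPEG_MAX_DIM = 65535

def split_rows(height, limit=JPEG_MAX_DIM):
    """Return (start, stop) row ranges, each no taller than `limit`."""
    return [(r, min(r + limit, height)) for r in range(0, height, limit)]

print(split_rows(100_000))  # -> [(0, 65535), (65535, 100000)]
print(split_rows(50_000))   # -> [(0, 50000)] -- fits in one piece
```

So a 100,000-pixel-tall image needs exactly two slices, which is why halving the 50%-size image works out so cleanly.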

So even though the crater code is only around 15% faster, the fact that I can run it on a 50%-size image chopped in half, versus a 50%-size image chopped into 6 or so pieces, makes things much easier bookkeeping-wise.

So now everything is working but Arc10. Need to decide on a VM, I guess, to run that for the 5% of the time I need it instead of 9.3.

Re: benchmarking with 7-Zip -- does it say how long it takes or how fast it's going? I ask because I didn't really see anything on the site other than "this is a compression tool."

I'm also trying to decide if I should try running the crater code with the CPU overclocked. I was able to easily overclock by 25%, and I was consistently getting 20-25% faster benchmarks on the Windows side. With two jobs each estimated to take 21-22 hours, and needing to do this for many different images, I'm wondering if shaving ~4 hours off that is worth the overclocking risks.
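For what it's worth, the "~4 hours" figure follows directly from treating the benchmark delta as the effective speedup on a 21-hour run:

```python
# Back-of-the-envelope arithmetic for the overclocking payoff.
def hours_saved(runtime_h, speedup):
    """Wall-clock hours saved if the job runs `speedup`x faster."""
    return runtime_h - runtime_h / speedup

print(round(hours_saved(21, 1.20), 1))  # 20% faster -> 3.5 h saved
print(round(hours_saved(21, 1.25), 1))  # 25% faster -> 4.2 h saved
```

So the real trade-off is ~3.5-4 hours per job against the stability risk of a long unattended run.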

The 7-Zip benchmark compresses and decompresses a fixed-size "dictionary"; the size of the dictionary can be increased or reduced, and the number of CPU threads can be decreased to 1 or increased to twice the hardware thread count of your CPU -- so for your 4-core i7 3820 you can run the benchmark with 1-16 threads. Most published 7-Zip benchmarks use the default 32MB dictionary with the default physical core count -- 4 in your case. You can evaluate your SMP results by increasing the number of threads, and evaluate your L3 cache and disk I/O by increasing the size of the dictionary. Once the memory used for the benchmark exceeds physical memory, you've pushed it into your SSD page file. I haven't used it much, but the TrueCrypt program has a similar benchmark, except it encrypts and decrypts an adjustable-size "buffer".

So how is your cached SSD page file/swap space working? Have you run a session of CrystalDiskMark?

As to overclocking, it is why you bought the Asus 2011-socket/X79-chipset MB rather than an Intel MB offering. Just monitor your temps and provide additional cooling the faster you push it. This review from thessdreview.com compares a 5.0 GHz overclocked i7 3820 against an i7 3930K on an ASRock X79-based MB. Of course, their dual-reservoir, radiator-equipped, water-cooled CPU heat sink is awe-inspiring -- I want one under my desk :cool:

Alright, I ran the 7-Zip thing and then found it had the whole "benchmark" function. I ran it mostly at the default 32MB size, though I also did 128MB just for fun, on 1, 4, 8, and 16 threads. The best score was with 16 threads, but it was barely higher than the score with 8: with 16 threads the rating was 2792, at 767% processor usage, for 21,402 MIPS; with 8 threads it was 2904 at 714%, for 20,634 MIPS. From [url=http://www.7-cpu.com/]this site[/url], that puts me at roughly the same as the Intel i7-2600K, which was the other chip I was looking at. I did not try this overclocked.

I'm not really sure what's going on with my CrystalDiskMark benchmarks. On my Mac under Windows XP in Parallels, I was getting sequential reads of around 500 MB/s and writes up to 1000 MB/s (with a 2000MB file and 50MB file, respectively). The 4K reads were around 22 and writes at 1-2. Meanwhile, the scores for the SSD on the Windows machine were around 170 read / 120 write, and 20 read / 60 write at 4K. I should be seeing significantly better reads. Similarly, the RAID was getting scores around half what my disk was reporting on the Mac. So, really not sure what's going on there.

As for SSD caching, I apparently did not read the ASUS fine print. Your main drive needs to be a HDD, and it needs to be on the port next to the SSD cache disk (both SATA 3 Gb/s). Since I'm not using a single HDD, and I have my main SSD hooked up to a SATA 6 Gb/s port, I ended up using the 60GB SSD as a "scratch" disk for the codes I'm running on Linux -- ISIS and the crater detection code -- since it's a larger partition than what I gave Linux on the main disk. Kinda a bummer, but oh well.

Agreed, something is not correct with your SSD configuration; you should be seeing closer to 500 MB/s for your main SSD, and page file performance on the other SSD is probably also degraded. The attached screenshots are for a Samsung 830 installed on a 3 Gb/s SATA II-only MB, at the 50MB and 2000MB file sizes.

Regarding 7-Zip, those numbers at the stock clock are spot on; they'll improve, of course, when you overclock. A neat way to use the benchmark is to see how your system performs once you push the calculations out of memory and into the page file. You'd do that with a larger dictionary and more threads, pushing your session size larger than physical memory. Attached is an example for an i7 with an SSD and only 9 GB RAM; note the change when the size exceeds physical memory and page file I/O ensues.

I'm also now running up against a strange Java issue. One of the programmers on one of my projects gave me clustering code he wrote in Java (SO much better than my code). It runs on a given file in about 30 seconds on my desktop and 14.5 seconds on the laptop. But on the new computer, under both Windows and Linux -- both running the same Java version as my Mac, 1.6.0_31, 64-bit -- it takes 40-50 seconds.

I thought it might be something hardware-related until I checked my version of the code: it took 47 seconds on my laptop, 27 on the new desktop under Windows. So there's something about his code that's causing Java to slow down by a factor of 5-6x (3x relative to my laptop, but given that my code runs ~2x faster, I'm saying 5-6x). I asked him to look into it and said I could run whatever diagnostics or variations he sent me, but I don't know enough about it to really know what to do.

Since your JRE is a new installation, I expect it is using the default -Xms and -Xmx memory settings. You might adjust those upward and see if the contributed program then works at least as well as yours -- you certainly have the memory to hold a large JVM and keep the program out of swap or the page file.
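As a concrete (made-up) example of bumping those defaults -- the jar name, input file, and heap sizes below are placeholders, not the actual program:

```python
# Illustrative only: launch a Java program with an explicit JVM heap
# (-Xms = initial size, -Xmx = maximum size) instead of installer defaults.
cmd = ["java", "-Xms2g", "-Xmx16g", "-jar", "clusterer.jar", "input.csv"]
print(" ".join(cmd))
# On the real machine you'd actually run it, e.g. via subprocess.call(cmd).
```

With 32 or 64 GB of RAM, a generous -Xmx keeps the whole working set in the heap and avoids both excessive garbage collection and paging.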

As to the SSD configuration, it's likely going to be a BIOS setting, but you may have to shuffle the SATA ports -- I would spend some time on the ASUS forums teasing out configuration help there.