I'm interested in this CPU family for a server. From reading the review, I get the impression that the A8-3850 would make a good balance of very low idle power consumption with quad cores and enough oomph for when it's really necessary. The A6-3600 would probably be the best choice, provided it's still quad-core.

I'm basically looking for the lowest-idle-power CPU/motherboard combo that gives me the ability to run Hyper-V, quad cores, and one or two PCIe slots (1.0 x8 is fine). I don't care about the graphics capabilities; I just want low power for a machine that will spend 95%+ of its time doing next to nothing but needs to be powerful enough to do some fairly CPU-intensive stuff (e.g. transcoding) when called for.

In terms of VMs, I intend to run a legacy install of WHS v1, WHS 2011, and Vyatta VC6.2 (a router/firewall). I need to see if I can get idle hard drives to spin down when running Hyper-V... I will be testing this with existing hardware before buying a new board and CPU. I currently run WHS v1 on a Celeron E1200, which is quite low power but not Hyper-V capable, although I have a spare Pentium which is. I will be running however many drives the motherboard supports, plus up to 28 via an Areca RAID card.

This ain't a server chip. Is there any reason you want to use it for a server? Some people have used AMD consumer CPUs for servers because they cost less than server CPUs but keep some features, like ECC, that Intel stripped from its consumer quad-cores. But I don't know which AMD CPUs have the second VM extension nowadays. Intel has it on some consumer CPUs if you don't care about ECC. Or maybe AMD's consumer CPUs support more RAM?

If you're going to use a consumer CPU, you typically get lower power consumption with Intel nowadays. I haven't seen measurements for every board under the sun, obviously, but the low-power AMD stuff usually features underpowered dual-cores, while you should be able to get about 20W of idle power consumption (not accounting for the drives or the RAID card, obviously) with an Intel quad-core and a good amount of RAM. The A8-3850's idle power consumption is only low compared to a system with dedicated graphics.

Once again, this all makes perfect sense. The bandwidth increases by 20%. The frequency also increases by 20% (i.e. instruction delay drops correspondingly, again assuming the same per-clock timings).

My issue is this: all of those "red lights" should resolve at best 20% faster. There is some small overhead in dealing with a saturated bus (to put it in threading parlance: you have to clear the pipeline, swap the registers, deal with stack considerations, etc.). But in real-world computing, the time dedicated to a given operation on a bus (or, in time-multiplexed channel schemes, the time allotted to "reserving a spot", which isn't even needed in the two-controller case) is relatively small compared to the throughput time (i.e. a 5-hour stoplight cycle would make the start/stop efficiencies irrelevant).

This breaks down if the engine is written in a way that very frequently forces the controller to change pages (i.e. every GPU operation depends on a CPU result, then the reverse, which would obviously bottleneck discrete graphics setups as well). The cases where you might see a significant decrease in overhead would require the channel to be non-saturated, which would decrease the amount of bandwidth-related performance increase you'd expect, and would (in almost any real-world case, though obviously not all, as the test shows) drop you below a 20% increase (because you went from a saturated to an unsaturated state, which increases efficiency, but not overall throughput).

The thing that is bothering me is that we are getting a better-than-ideal performance increase. If I told you I could double the efficiency of your car's engine, and then you get three times the mileage, something is up.
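To make the "at best 20%" intuition above concrete, here's a toy model (my own sketch, not from the review, with made-up numbers): frame time is bounded by whichever is slower, bus transfer or compute, so a 20% bandwidth bump can never buy more than a 20% speedup in this model.

```python
# Toy model: frame time when the memory bus may be the bottleneck.
# All numbers are hypothetical, chosen only to illustrate the cap.

def frame_time(bus_bytes, bandwidth, compute_time):
    """Frame time is bounded below by both transfer time and compute time."""
    return max(bus_bytes / bandwidth, compute_time)

bus_bytes = 200e6      # bytes moved over the bus per frame (hypothetical)
compute_time = 0.010   # seconds of CPU/GPU work per frame (hypothetical)

base = frame_time(bus_bytes, 10e9, compute_time)  # 10 GB/s -> bus-bound, 0.020 s
fast = frame_time(bus_bytes, 12e9, compute_time)  # +20% bandwidth

speedup = base / fast - 1
print(f"speedup: {speedup:.1%}")  # 20.0% at best; never more in this model
```

If the bus were not saturated to begin with, `fast` would be clamped by `compute_time` and the speedup would come out below 20%, which is exactly why the measured >20% result is puzzling.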

EDIT: Hmm, one possibility that occurred to me: I assumed that the same operations were performed on a per-frame basis. This is quite possibly not the case. IIRC, many games have a fixed server frame rate (tick rate?), so perhaps the total number of operations required per frame is not fixed. This would explain the performance increase.
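The fixed-tick-rate idea can be sketched with a simple model (my assumption, not measured data): if part of the bus traffic is per-second (simulation at a fixed tick rate) rather than per-frame (rendering), then extra bandwidth goes entirely to rendering, and frame rate can scale by more than the bandwidth increase.

```python
# Sketch: bandwidth budget = fps * per-frame traffic + fixed per-second traffic.
# All numbers are hypothetical.

def fps(bandwidth, per_frame_bytes, per_second_bytes):
    # Solve bandwidth = fps * per_frame_bytes + per_second_bytes for fps.
    return (bandwidth - per_second_bytes) / per_frame_bytes

per_frame = 150e6   # render traffic per frame (hypothetical)
per_second = 4e9    # fixed simulation traffic per second (hypothetical)

base = fps(10e9, per_frame, per_second)  # (10e9 - 4e9) / 150e6 = 40 fps
fast = fps(12e9, per_frame, per_second)  # (12e9 - 4e9) / 150e6 ~ 53.3 fps

print(f"{(fast / base - 1):.0%}")  # ~33%: more than the 20% bandwidth bump
```

The fixed per-second term acts like a constant tax on bandwidth, so whatever is left over scales superlinearly, which would account for a better-than-20% frame-rate gain.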

This ain't a server chip. Is there any reason you want to use it for a server?

23W idle power consumption (DC, according to the review) is pretty good. Intel systems go that low, but only with dual-core parts AFAIK. If you know of an Intel quad-core that idles lower, please let me know. I guess the mobile parts do, but unless there's an ATX motherboard available, mobile CPUs are not an option...

It has AMD-V, the equivalent of Intel's VT-x, so it should be Hyper-V compatible.

I don't need ECC support in a home server! Quad core and 8GB RAM will be good enough, maybe 4GB at a pinch!

The RAID card is about 10W, and with typical power supply inefficiency at low loads, I'll be very happy to get 40-45W idle power consumption at the AC socket.
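As a back-of-envelope check on that figure (the efficiency values are my assumption; PSU efficiency is typically poor at very light loads): AC draw is just DC draw divided by PSU efficiency.

```python
# AC wall draw from DC draw and PSU efficiency. Efficiency values below
# are hypothetical but plausible for a ~30 W load on a desktop PSU.

def ac_watts(dc_watts, efficiency):
    return dc_watts / efficiency

dc_idle = 23 + 10  # 23 W system (DC, per the review) + ~10 W RAID card

for eff in (0.70, 0.75, 0.80):
    print(f"{eff:.0%} efficient PSU: {ac_watts(dc_idle, eff):.1f} W at the socket")
```

At 70-80% efficiency that lands in the low-to-mid 40s, consistent with the 40-45W hope.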

23W idle power consumption (DC, according to the review) is pretty good. Intel systems go that low, but only with dual-core parts AFAIK.

No. The quad-cores go a good bit lower. Please read the SPCR review this article is discussing.

aitor wrote:

It has AMD-V, the equivalent of Intel's VT-x, so it should be Hyper-V compatible.

I'm talking about the second extension (AMD-Vi/VT-d), for I/O performance (apparently you intend to do a lot of I/O). Only some Atoms don't have the one you're talking about.

aitor wrote:

I don't need ECC support in a home server!

The need depends on the application, not the location. And the added relative cost decreases with the number of drives.
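To illustrate that point with made-up prices (all figures below are hypothetical, not quotes): the ECC premium is a fixed cost, so its share of the total shrinks as drives are added.

```python
# Hypothetical costs, for illustration only.
ecc_premium = 60.0   # extra cost of ECC RAM + ECC-capable board ($)
base_system = 400.0  # CPU + board + RAM + PSU ($)
drive_cost = 80.0    # per 2 TB drive ($)

for n_drives in (2, 8, 28):
    total = base_system + n_drives * drive_cost
    print(f"{n_drives:2d} drives: ECC adds {ecc_premium / total:.1%} to total cost")
```

With a couple of drives the premium is around 10% of the build; at 28 drives it's a rounding error.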

aitor wrote:

The RAID card is about 10W

But the drives are going to eat more (a lot more if you max out!). So I wouldn't count on 40W at the socket with the drives included, and excluding the drives I'd expect even less, since the drives' load will push the PSU into a more efficient operating range.

No. The quad-cores go a good bit lower. Please read the SPCR review this article is discussing.

My bad, I thought i5 = dual core, so assumed the i5-2500K was dual-core... That's good. So it will come down to price then (CPU + mobo + RAM). Although the lower-end AMDs (e.g. the A6-3600) might do even better, but they aren't available/reviewed yet as far as I can see.

I/O load will be light, both in terms of transfer rates and IOPS, and I don't think Hyper-V supports AMD-Vi/VT-d yet anyway. Light = 1 Gbit/s Ethernet, which I can saturate easily with my existing system (Celeron + the same Areca RAID controller). I'm not expecting this little system to compete with my work SAN (which can saturate 10 Gbit/s iSCSI!!).

Drives: I'll be using mostly Samsung F4 2TB drives, which idle at 5W. I'm hoping that Hyper-V will allow them to be spun down while idle, so it won't be 5W x 28 drives at idle - and I'll be testing this before making any purchases. Windows Server 2008 R2 (with Hyper-V installed) still gives you the option to spin down idle disks, but I don't know if it will actually spin them down when they are mounted by VMs. If it doesn't, the whole project is off and I'll stick with my current setup, which saves power by using the LightsOut add-in - it sends the server into hibernation when there are no clients. Hyper-V does not support hibernate/sleep, so I'm reliant on drive spin-down working.
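For a rough sense of the stakes (the standby figure below is my assumption; check the drive's datasheet, since parked draw varies a lot between models):

```python
idle_w = 5.0     # spinning idle, per drive (from the post)
standby_w = 0.8  # spun down, per drive (hypothetical datasheet value)
n_drives = 28

spinning = n_drives * idle_w
spun_down = n_drives * standby_w
print(f"spinning: {spinning:.0f} W, spun down: {spun_down:.0f} W, "
      f"saving {spinning - spun_down:.0f} W")
```

Spin-down working or not is the difference between a server that idles near a single light bulb and one that idles well above 100W, so it really is make-or-break for the project.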

Yeah, for serving media files there's no point in getting a highly reliable piece of kit. Worst case, you restore them from backups. And you don't need anything sophisticated to saturate a GigE link, true. Transcoding is going to be your biggest CPU-limited load. Any I/O limitations will come from random access to the drives themselves, a problem which can only be fixed with caching.

i5 and such are marketing designations with no consistent real-world meaning. Intel makes it pointlessly difficult to know what you're buying, so I use Wikipedia to decide which Intels to get!

If you look at SPCR's tables, you'll see that power efficiency can vary a lot. Some drives eat 4W when parked!

The thing that is bothering me is that we are getting a better-than-ideal performance increase.

I do find this general talk interesting, but I'm not going to do the math or research specific stats on the cores involved to the level needed to explain the scaling. You'd be better served going to semiaccurate.com or some other processor-design forum to discuss it in that detail.

No offense intended. I'm willing to read up on it enough to know it isn't magic when something scales at better than 100%, and that it isn't impossible for it to scale negatively, not at all, slightly, or outrageously. Knowing that not all outcomes are easily predictable, I just try it and see what the real results are. I might guess beforehand, but I won't worry too much if my prediction is off.

In short I'm not sure why you are so bothered by it. Don't worry, be happy.

If you are advanced, the reference below may sound condescending, but I offer it only in case someone reading this doesn't know how nonlinear performance increases could occur.

Maybe I left a term or two out, but if you end up googling these and reading, you'll bump into anything I left out.

There are many things done in a CPU or GPU that can create severe penalties, and conversely exceptional increases in performance when those penalties are avoided.

Trying to explain why that is requires analyzing the work and the capabilities of the workers. Every scenario is different.

_________________
Please put a country in your profile if you haven't already. This site is international but I'll assume you are in the US if you don't tell me otherwise. RAID levels thread: http://www.silentpcreview.com/forums/viewtopic.php?p=388987

In short I'm not sure why you are so bothered by it. Don't worry, be happy.

Curious might be a more accurate term. I'm just not used to "exotic" behavior from memory timing. Most of the usual suspects (per your list) are predetermined by code, compiler, ISA, hardware, etc. Network theory is applicable to CPU/GPU/memory interactions, and I'm not exactly up to date on the interface side of things (PCI-what?), but the 50% performance increase on the one test is just downright surprising. Unpredictable things happen, but they are almost never in your favor!

I'm actually pretty happy with the varying per-frame computation idea, though. I haven't had time to do a lot of reading, but fixed server(/backend) update rates are apparently pretty common. So in a case where bandwidth was at a premium, all the gains might go to graphics.

The whole Llano series implements what is effectively a variable TDP: it throttles the CPU and/or GPU based on available headroom in the total TDP for the APU combo. What if the APU is slowing the GPU down to match the bandwidth of the RAM (by noticing idle time when it stalls) and thus saving power? That could create some very small latencies for speed switching and/or just make the GPU less responsive to greater demands.

I honestly don't know if this, or one of twenty other ideas in my head, explains the >20% increase when upping the RAM bandwidth by ~20%. Feel free to add any other oddball guesses you like, and maybe one day an AMD employee will tell us how many have any validity.


Interesting as the concept is (CPU/GPU combined), I'd be OK with building one for someone if it met their needs. It's quite good in that respect: decent overall balance, respectable though hardly mind-blowing CPU performance, and good 3D too. It's worth remembering that Socket FM1 is not much more than a transitional socket and will be replaced by FM2 next year. Whether FM2 will be backwards compatible with these FM1 processors, we'll have to see.

Socket AM3+ is also a short-term socket, merely a minor update of AM3. But the critical difference is that if you're building to pack a punch, i.e. enough grunt for three or four years or so, AM3/AM3+ is the only way to go for AMD builds.

I can't see a lot of processors getting made for FM1, whereas AM3 has tons of choices; add the new FX ones and you're good to go.

Both FM1 and AM3+ are moving to FM2 next year. To be blunt, I think AMD has made an error here: they should have made one unifying socket for both of these processor lines now rather than stopgap ones. I'd not be overly concerned about it, but I most definitely would not lay down any kind of semi-serious cash for a higher-end board on either platform. For FM1 I would go more budget end.

Lawrence: One thing I found interesting was the GPU performance graphs. Comparing the integrated Intel HD against the APU showed almost identical numbers. Pretty impressive that AMD was able to match Intel's performance.

One thing I get a real kick out of is that most people don't understand what market this chip is aimed at. It's certainly not aimed at the performance/gaming market but at the business/budget markets. Keep in mind that most of the back panels have all four video connections (VGA/DVI/HDMI/DisplayPort), whereas most mid-to-high-end video cards are limited to HDMI/DisplayPort connectors. That's an important issue because most people don't tend to upgrade monitors until they actually fail, meaning there are lots of them still using VGA connectors.

Heck, it's even difficult to get a new monitor with anything other than a VGA port, so the insistence of Nvidia and AMD on obsoleting them is actually cutting their market by a large margin. For example, I recently bought a new monitor, an HP 23-inch 1080p unit. Guess what: it's only got VGA and DVI, so the HDMI port on my one-year-old Radeon isn't useful. In fact, when I bought the card I specifically got the model with a VGA port, as my then-current display only had a VGA connector, and that's where many businesses are today. They don't care about DVI/HDMI or even DisplayPort for their minions. This means VGA still rules the desktop, and that's why the Fusion spec includes a VGA connector. Business needs.


The HDMI ports are mostly for HTPC use, not monitors - very few monitors have HDMI, as you've noted. I think you're overestimating the number of VGA-only monitors out there, though. They certainly exist, but even my very cheap three-year-old Westinghouse I use for work has DVI, and I've barely seen a VGA-only LCD in the last four years. DP is a fairly rare input outside of the high end monitors, but DVI is very common.

As a number of other posts relate, the APU is very sensitive to memory speed, and for those questioning why they didn't test with faster memory, benchmarks have already been done by others. ExtremeTech did an article benchmarking the impact memory speed has on the APUs, plus there's always our friend Google to find more. Lots of them out there.
