New servers: inferior CPU performance

To my surprise, the performance of the new VPS was much lower using exactly the same OS (FreeBSD 11) and settings. Recompiling the same project needed almost 30% more time, and the same went for building other FreeBSD ports.

At first I assumed it was something random, so I destroyed that instance and created a third, a fourth, and a fifth one in different data centers (Amsterdam, Frankfurt), but the problem persists!

According to the system info, the CPU is different too! The old VPS reports 2400 MHz, ID=0x306c1, model=0x3c, but all the new ones report 2394 MHz, ID=0x306d2, model=0x3d.

After further testing with other OS distros and benchmarks, I came to the conclusion that Vultr decided to use lower-performance processors in their new boxes.

Any idea what's going on with this really odd situation? I do like Vultr, but performance is supposed to increase over time, not decline!

Comments

@Athan I'm running FreeBSD on 5 different instances. What benchmarks did you use, and what results do you get? I'll post mine and we'll see what's going on. You shouldn't be seeing any major speed difference at all.

Did you install from the FreeBSD custom install ISO, or use the Vultr automated FreeBSD option?

From what I can gather, "0306c1" is 4th generation "Haswell" and "0306d2" is 5th generation "Broadwell". For the same CPU speeds etc. if anything the new CPU should be faster.
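Those IDs are standard x86 CPUID signatures, and you can decode them yourself to confirm the Haswell/Broadwell reading. A minimal sketch (plain Python, no FreeBSD specifics assumed):

```python
def decode_cpuid(sig):
    """Decode an x86 CPUID signature into (family, model, stepping).

    Per the standard encoding: bits 0-3 stepping, 4-7 model,
    8-11 family, 16-19 extended model, 20-27 extended family.
    """
    stepping = sig & 0xF
    model = (sig >> 4) & 0xF
    family = (sig >> 8) & 0xF
    ext_model = (sig >> 16) & 0xF
    ext_family = (sig >> 20) & 0xFF
    # For family 6 and 15, the displayed model includes the extended model bits.
    if family in (0x6, 0xF):
        model |= ext_model << 4
    if family == 0xF:
        family += ext_family
    return family, model, stepping

# The two signatures from this thread:
print(decode_cpuid(0x306c1))  # family 6, model 0x3c (Haswell), stepping 1
print(decode_cpuid(0x306d2))  # family 6, model 0x3d (Broadwell), stepping 2
```

So the old instances report model 0x3c and the new ones 0x3d, matching the Haswell-to-Broadwell change described above.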

I've not had a chance to speedtest yet, but every CPU of mine is different! (Note the only 2 with the same speed and family/model/stepping have a different CPU string (which may or may not mean anything))

I'm puzzled, though, that the features and speed etc. of the reported CPU are basically the same - is it possible that this is due to some different resource setting on new KVM guests rather than the listed CPU itself?

I'm about to clone one of my original instances and compare the two, though at this stage I guess I'm only going to confirm your findings...

I snapshotted it, and used it to create 4 new $5.00 instances, in London, Paris, Seattle, and Sydney.

I ran the build of perl on each machine, slept 30 minutes, then repeated.

As with you, I got some results around 5 minutes per server, and one around 7.
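For anyone wanting to reproduce this kind of comparison, here's a rough timing harness (a Python sketch; the port path in the comment is just an example, not a verified one):

```python
import subprocess
import time

def time_command(cmd, runs=3):
    """Run a shell command `runs` times, returning wall-clock seconds per run.

    A crude sketch for comparing identical instances; output is discarded
    so terminal I/O doesn't skew the numbers.
    """
    samples = []
    for _ in range(runs):
        start = time.monotonic()
        subprocess.run(cmd, shell=True, check=True,
                       stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        samples.append(time.monotonic() - start)
    return samples

# Hypothetical usage on each instance (adjust the port path to your tree):
# samples = time_command("make -C /usr/ports/lang/perl5 clean build", runs=2)
# print(min(samples), samples)
```

Running the same command the same number of times on each box, and comparing the minimum rather than the mean, helps filter out noisy-neighbour interference on live instances.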

So in summary:

It's not affecting all new instances, fortunately. It's also not related to the CPU (you'll see I got 3 new CPUs and 1 older one on my new instances).

The only thing I've noticed that tallies with the slower operation is that the slower one reports 2.394 GHz, and the others 2.4 GHz. Obviously that difference in itself isn't the issue, but it's the only thing I can see that is different. You may be able to spot something here that I've missed!

I've had a host online since 1999 - back then they were dedicated, and indeed, a hell of a lot more expensive than they are now. And those hosts ($60+) were less powerful than a $5 vultr instance today.

If the new 2.4 GHz CPUs are Broadwell-EP based E5-26xx v4, then depending on application/usage they could be slower. You may have run into the same thing discussed at https://www.webhostingtalk.com/showthread.php?t=1624698: E5-26xx v4 L3 latency is almost double that of E5-26xx v1 (though with a larger L3 cache), and to better leverage the performance of the E5-26xx v4 you would need to recompile the application specifically for the newer CPUs.

Also, if you're using source compilations as benchmarks, be sure to take into account each VPS's disk I/O performance, as it's pretty low (relatively) for Vultr VPSes compared to other VPS providers, from my tests.
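To rule disk I/O in or out before blaming the CPU, a crude sequential-write probe can be run on each instance (a sketch only; for serious numbers use a real tool such as fio or bonnie++):

```python
import os
import time

def write_throughput_mb_s(path, size_mb=64):
    """Rough sequential-write throughput in MB/s.

    Writes incompressible data and fsyncs so the result reflects the
    disk rather than the page cache. Not a substitute for a real
    benchmark -- just a quick sanity check across instances.
    """
    chunk = os.urandom(1024 * 1024)  # 1 MB of random (incompressible) data
    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # force the data to stable storage
    elapsed = time.monotonic() - start
    os.remove(path)
    return size_mb / elapsed

# Hypothetical usage: run on each VPS and compare before trusting
# compile-time benchmarks.
# print(write_throughput_mb_s("/tmp/io_probe.bin"))
```

If the slow instance also shows markedly worse write throughput, the build-time gap may be an I/O issue rather than a CPU one.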

Yeah, as I posted, the newer CPUs are Broadwell, but this isn't what is causing the HUGE speed difference, unless there are some different Broadwells in the mix... (my Sydney Broadwell instance is slower than my Sydney Haswell, but the other Broadwells in the test are the same speed as the Haswell)

I forgot to mention. All my first tests stemmed from a snapshot of my sydney instance - this was built 100% from source, with:

I have to agree with Jamie. It looks like a virtualization setup issue, or the CPU type and clock frequency are wrongly reported. Nothing else justifies such a huge linear difference in performance. Anyway, I've noticed that all new instances created after this forum thread started are as fast as before. Maybe the Vultr folks keep an eye on this forum...

Unfortunately, the FreeBSD port of UnixBench on the latest 2GB instances gives only 457, just a tad higher than a cheapo Scaleway Atom instance.

Before Vultr's latest virtualization changes, I was able to achieve 950-1080 using the same OS / UnixBench version, and I'm pretty sure that owners of the old 3.xx GHz instances could get even higher scores.

As far as I understand (and I may be wrong), the way UnixBench works is heavily dependent on the compiler/OS etc., so it can't really be used to accurately compare different OSes; it's more appropriate for comparing the same OS on different hardware.

I totally may be totally thinking of totally something totally else, though, totally!

Anyway, out of interest, I ran UnixBench on 3 of my original instances. This time it was just a single pass, and the instances, although quite light on resource use, were live. So these results need to be taken with a pinch of salt, but here they are: