Connor raises an interesting question: does the original source of software
being a student project correlate with superiority or inferiority of a tool,
as compared to software sourced "officially"?

This is a question of fact likely to go unanswered in any rigorous sense.
If the domain of competition is all student projects, then "official"
software will probably win, but I suspect student projects that survived
into common use have the edge over "official" software as a whole. MS-DOS,
for example, was "official" software, while the Dartmouth Time Sharing
System was largely a student project. Only a tiny number of people ever
hacked on DTSS.

As to sar and vmstat, I've gotten varied results over the years as to which
more accurately portrayed various values on which releases of which
operating systems. Unless you have an independent physical measure of an
operational component that both vmstat and sar measure, or a way to
establish a known load and have both sar and vmstat report on it, you can't
tell for sure. Even then some of the measurements will get "Heisenberged":
the act of sampling perturbs the system being sampled. Finally, at the
bottom line there is the frequency with which kmem structures or their
analog are flushed, so you just can't get around the problems of time-slice
aggregation and averaging for some of the metrics being tracked by both
these tools.
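The averaging problem is easy to see with a toy example. The per-second
"percent busy" samples below are invented for illustration: a one-second
spike to 100% all but vanishes when reported as a ten-second interval
average, which is roughly what either tool does at a coarse sampling
interval.

```python
# Toy illustration of time-slice averaging. The sample values are
# hypothetical, not real measurements from sar or vmstat.
samples = [5, 5, 5, 100, 5, 5, 5, 5, 5, 5]  # percent busy, one-second slices

peak = max(samples)
interval_avg = sum(samples) / len(samples)  # what a 10-second interval reports

print(f"peak busy: {peak}%")                # prints: peak busy: 100%
print(f"10s average: {interval_avg:.1f}%")  # prints: 10s average: 14.5%
```

Neither tool is "wrong" here; the spike is simply below the resolution of
the reporting interval, and no amount of cross-checking sar against vmstat
will recover it.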

Usually you can find release notes or bug reports for a specific combination
of hardware and operating system if one tool or the other is far enough off
to be unreliable on that combination. In the absence of such notes, if you
can reduce the problem to a particular system and release, you can probably
run a few load tests to decide which of the two is better in your
situation.
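If you do run such a side-by-side test, one low-tech approach is to capture
output from both tools during the same known load and compare the idle
column, located by header name rather than by fixed position, since field
layouts vary across releases. A sketch in Python; the captured lines below
are hypothetical examples, not real measurements, and the header names
("id" for vmstat, "%idle" for sar) should be checked against your system's
actual output:

```python
def column_value(header: str, row: str, name: str) -> float:
    """Pick the field of `row` that lines up with `name` in `header`."""
    return float(row.split()[header.split().index(name)])

# Hypothetical captures, taken during the same known load.
vmstat_header = " r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st"
vmstat_row    = " 1  0      0 812340 120000 900000    0    0    10    20  300  400  3  2 94  1  0"

sar_header = "12:00:01 AM     CPU     %user     %nice   %system   %iowait    %steal     %idle"
sar_row    = "12:00:02 AM     all      3.00      0.00      2.00      0.50      0.00     94.50"

print(column_value(vmstat_header, vmstat_row, "id"))  # prints: 94.0
print(column_value(sar_header, sar_row, "%idle"))     # prints: 94.5
```

A small, stable disagreement under a repeatable load tells you which tool
to trust on that system; a disagreement that wanders with the sampling
interval is the averaging problem above, not a bug in either tool.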