I'm not an expert on this stuff, but I saw odd things running two PMs at the same time. There seemed to be more variation until I had both Garmins (Edge 500s in my case) running the exact same firmware version, set up with exactly the same options (i.e. smart recording off), and until I made sure I zeroed out the Quarq (sometimes several times during the ride).

The few times I had one PM paired to two head units, I didn't see any variation (with both the Stages and the Quarq).

As far as riding goes, I've been doing a mix of intervals, hill repeats, steady efforts, and some group rides, which are all over the place. I generally watch 3 s and 30 s power and then compare average power at the end of the ride.

Are you riding with the Stages and the Quarq at the same time (transmitting to different devices)?

Comparing average power for a ride, while nice, doesn't tell anyone very much.

It is a good sign that 3 s power agrees (are you really able to compare during hard efforts where speed/power/cadence are changing?) -- but best is to look at the individual data points from each file once they have been time-synched. Are you doing that? (Maybe you are -- but you didn't say, other than mentioning averaged power: 3 s, 30 s, AP, presumably all on the head units' displays.)
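
To make the "compare individual data points" idea concrete, here's a minimal Python sketch. The `align_and_diff` helper and the sample numbers are invented for illustration (not from any real ride file): pair up the seconds both units actually recorded, then look at per-point differences instead of ride averages.

```python
# Hypothetical sketch: align two head units' per-second records by timestamp
# and compare point-by-point, rather than comparing only ride averages.
# All sample data below is made up for illustration.

def align_and_diff(file_a, file_b):
    """Each file is a dict of {timestamp_seconds: watts}.
    Returns per-point differences for timestamps both units recorded."""
    common = sorted(set(file_a) & set(file_b))
    return [(t, file_a[t] - file_b[t]) for t in common]

unit_a = {0: 250, 1: 255, 2: 260, 3: 258}
unit_b = {0: 251, 1: 255, 3: 260}   # note: unit B dropped second 2

diffs = align_and_diff(unit_a, unit_b)
mean_abs = sum(abs(d) for _, d in diffs) / len(diffs)
print(diffs)       # [(0, -1), (1, 0), (3, -2)]
print(mean_abs)    # 1.0
```

Two files with identical averages can still disagree point-by-point, which is exactly what this kind of comparison surfaces.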

uraqt wrote:

what does everybody think of this? from Neal Henderson twitter

Neal Henderson (@nealhenderson): "And my SRM with 2 different head units on at same time and found typical variations of 2-4% for 1 powermeter broadcasting to 2 head units."

kinda means that nothing is going to be "good" :(

C

Love to see some actual data. I've used the same power meter to broadcast to multiple devices, and while the files are not exact, they are very, very close. Making a claim that there is significant variation and then not showing us evidence does not advance his argument.

How does ANT+ deal with multiple head units? Is it a broadcast transmission, where each head unit should see the same communication, or is it a two-way transmission, where each head unit gets a separate piece of data? Because if it's the latter, there's a potential race condition where the data is sampled at different times, which will obviously cause mismatches.
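
For what it's worth, ANT+ sensor data is a broadcast: the power meter transmits on one channel and any number of head units can listen. But even with an identical broadcast, recorded files can differ if each head unit's 1 Hz recording tick samples the stream at a different moment. Here's a toy Python simulation of that sampling-offset effect (this is not real ANT+ code, and all the numbers are invented):

```python
# Toy simulation: a power meter broadcasts several messages per second;
# each head unit records the most recent message it heard when its 1 Hz
# recording tick fires. If the ticks are offset, the two recorded streams
# differ slightly even though the broadcast itself is identical.

def latest_before(broadcasts, t):
    """broadcasts: list of (time_seconds, watts) sorted by time.
    Returns the last broadcast value at or before time t."""
    value = None
    for bt, w in broadcasts:
        if bt <= t:
            value = w
        else:
            break
    return value

broadcasts = [(0.00, 250), (0.25, 252), (0.50, 255), (0.75, 251),
              (1.00, 248), (1.25, 249), (1.50, 253), (1.75, 256)]

unit_a = [latest_before(broadcasts, t) for t in (0.9, 1.9)]   # ticks at x.9 s
unit_b = [latest_before(broadcasts, t) for t in (0.6, 1.6)]   # ticks at x.6 s

print(unit_a)  # [251, 256]
print(unit_b)  # [255, 253]
```

So even a pure broadcast can produce small per-second mismatches between head units, without any two-way race condition at all.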

I put about 12k miles on my Quarq last year and have a little less than 2k miles riding with both a Stages unit (which I was lucky enough to get early) and the Quarq. Anecdotal, but at this point I feel the Stages unit is as consistent as the Quarq.

With all respect, I can find similar, fuzzy statements about any power meter. DC Rainmaker did extremely careful measurements with multiple parallel comparisons.

I wasn't trying to make my post sound like a comprehensive test/review. Just an anecdotal experience from someone who has used the unit and is used to racing and training with a power meter. Personally if I were considering buying one, I wouldn't base my opinion on a few anonymous internet posts -or- the first preliminary evaluation. Anyway, it sounds like more reviews are pending...

We are doing our own testing. Our data looks a bit different. To be honest, ours looks better, and thanks to Chung and Anhalt's work we know what specific abnormality to look for (although we are also looking for others, of course). It will be a little while before we feel comfortable publishing anything, though.

Rainmaker does truly excellent work, but in this case I think it would have been worth trying additional units, and maybe another set of legs. Quite the death sentence to pronounce based on a single unit. It's possible that either our unit is better (it has brand-new firmware) or my legs are more even (previous testing has indicated I'm within about 1.5% left to right, even well into an LT test).

I agree it's only one unit and perhaps it was screwy, but we could only analyze the unit that was sent. Stages sent out new firmware last week and said that it was in response to something in our analysis, though I don't have any idea what that something is.

We've looked at multiple heads on one PM, and there are tricks to making sure you're comparing apples to apples. For example, one thing I've seen is that one Garmin model can be slower in registering a change in speed or power than another Garmin model, and one model may have an occasional data drop compared to another. What this means is that you shouldn't do second-by-second comparisons -- and we didn't. Note, btw, that it's easier to check this kind of thing on speed and distance than on power. That's why I generally synch up *not on time* but *on distance as calculated from the speed.* That is, take a look at the recorded speed from different head units on the same PM, or even different head units on different PMs. The speed will turn out to be very, very consistently recorded.

If you do this, you'll see that occasionally one or another head unit will hiccup, either skipping a signal or else skipping a time marker. Ever notice that even if you do your darnedest to mark intervals on two head units as closely together as possible, by the end of a ride one head unit will be off by 4 or 5 seconds compared to the other? You know you didn't hit the buttons 4 or 5 seconds apart. That's what I'm talking about. In either of these two situations, integrating the speed up to distance will show the misregistration between the head units. A side effect is that if you don't synch up on distance but instead try to synch up on time, you can find a growing difference in speed and a corresponding growing difference in power.
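
The synch-on-distance idea above can be sketched in a few lines of Python. Everything here is invented for illustration (the helpers, the tolerance, and the ride data are all assumptions, not anyone's actual analysis code): integrate each unit's recorded speed to cumulative distance, then pair samples at matching distances, so a dropped second shifts one unit's time axis but not its distance axis.

```python
# Sketch of "synch on distance, not time": integrate 1 Hz speed samples to
# cumulative distance, then pair each sample from unit A with unit B's
# sample at the nearest cumulative distance. All data is made up.

def cumulative_distance(speeds_mps):
    """1 Hz speed samples (m/s) -> cumulative distance after each sample."""
    dist, out = 0.0, []
    for v in speeds_mps:
        dist += v          # 1-second samples, so distance += v * 1 s
        out.append(dist)
    return out

def align_on_distance(dist_a, power_a, dist_b, power_b, tol=5.0):
    """Pair each unit-A sample with unit B's sample at the nearest
    cumulative distance, skipping pairs further apart than tol metres."""
    pairs = []
    for da, pa in zip(dist_a, power_a):
        db, pb = min(zip(dist_b, power_b), key=lambda s: abs(s[0] - da))
        if abs(db - da) <= tol:
            pairs.append((pa, pb))
    return pairs

speed_a = [10, 10, 10, 10, 10, 10]        # records every second
power_a = [250, 252, 254, 256, 258, 260]
speed_b = [10, 10, 20, 10, 10]            # hiccup: merged two seconds
power_b = [251, 253, 257, 259, 261]

pairs = align_on_distance(cumulative_distance(speed_a), power_a,
                          cumulative_distance(speed_b), power_b)
print(pairs)  # [(250, 251), (252, 253), (256, 257), (258, 259), (260, 261)]
```

Aligned by time (index), everything after unit B's hiccup would compare the wrong seconds; aligned by distance, the matched pairs line up again and only the unmatchable sample around the hiccup is dropped.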

So one thing I've been doing is averaging over many seconds -- but if the problem is not only misregistration but also a data drop (where power or speed goes to zero for a second or two), then you can still get an artifactual difference in speed or power or cadence or anything else you're trying to track. One way to get around it is to snip out the parts around a data drop; another is to do a robust smooth (like an odd-span median smooth) over the data before doing something like a kernel smooth. I don't usually bother doing the latter because it's a pain in the butt, but I toss it out there in case you have more patience than I do.
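
A minimal stdlib-Python sketch of that cleanup, with invented data (a moving average stands in for the kernel smooth here): an odd-span running median repairs a one-sample dropout, while a two-sample dropout survives a span-3 filter, which is why snipping around longer drops is still needed.

```python
# Sketch: odd-span median smooth to knock out short dropouts (power briefly
# reading zero), optionally followed by a kernel smooth (a simple moving
# average here). Sample data is invented for illustration.
from statistics import median

def median_smooth(xs, span=3):
    """Odd-span running median; robust to isolated one-sample dropouts."""
    h = span // 2
    return [median(xs[max(0, i - h): i + h + 1]) for i in range(len(xs))]

def moving_average(xs, span=3):
    """Simple stand-in for a kernel smooth."""
    h = span // 2
    out = []
    for i in range(len(xs)):
        window = xs[max(0, i - h): i + h + 1]
        out.append(sum(window) / len(window))
    return out

power = [250, 252, 0, 251, 249, 0, 0, 253]   # one single and one double dropout
robust = median_smooth(power)
print(robust)  # [251.0, 250, 251, 249, 249, 0, 0, 126.5]
print(moving_average(robust))
```

Note the single dropout at index 2 is repaired (251), but the consecutive zeros at indices 5–6 pass straight through a span-3 median -- averaging over those without snipping first would still bias the comparison.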

I'll be interested in seeing your analysis. BTW, if you (or anyone else) think we did a crappy job with the analysis, Ray put the raw data files up -- there's a link in one of his comment replies.
