Would have been nice if he'd done a dozen or so runs on each and given an average time.
But then he’d have got tired, so the order of the tests might be a significant factor and conditions may have changed.
I’ve been riding a 26″ full-suss (Five) and a 29er HT (Solaris) for over a year now and have got loads of data on segments that have been ridden a dozen or more times on each bike. The tests obviously aren’t blind (I know what bike I’m riding), but since I wasn’t actually trying to compare the bikes when I did the rides, they’re as close to unbiased as you’ll get. I’ve tried running some stats on the results and the only conclusion I can come to is that “some days I’m faster than others”.
There are smooth fireroad climbs where the HT is faster (as you’d probably expect), but other smooth fireroad climbs where the best time on the Five is almost 10% faster than the best time on the Solaris. There are rocky descents where the Five is faster, but others where the Solaris wins. Even trying more formal tests on the full set of times (rather than just comparing best times) fails to show anything significant.
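For anyone curious what that sort of test looks like: here’s a minimal sketch of Welch’s two-sample t-test on segment times, using entirely made-up numbers (the times and the spread are assumptions for illustration, not my real Strava data). The point is that when the day-to-day scatter is big relative to the gap between the bikes, the t statistic comes out tiny:

```python
import statistics

# Hypothetical segment times in seconds -- invented numbers, not real ride
# data. The day-to-day spread is deliberately large relative to any
# between-bike difference, which is the situation described above.
five_times = [312, 298, 305, 321, 290, 308, 315, 301, 296, 310, 318, 303]
solaris_times = [307, 295, 311, 300, 289, 316, 304, 299, 313, 292, 306, 309]

def welch_t(a, b):
    """Welch's two-sample t statistic (allows unequal variances)."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    return (mean_a - mean_b) / (var_a / len(a) + var_b / len(b)) ** 0.5

t = welch_t(five_times, solaris_times)
print(f"means: {statistics.mean(five_times):.1f}s vs {statistics.mean(solaris_times):.1f}s")
print(f"t statistic: {t:.2f}")
```

With a dozen runs per bike you’d want |t| somewhere above roughly 2 before calling the difference significant; on numbers like these it comes out well under 1, i.e. “some days I’m faster than others”.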
The bottom line is that the variance in my own performance is much higher than any difference between the bikes, so in order to get a significant result I’ll need a lot more data. That’s comparing two pretty different bikes. I’d hate to think how much data you’d need to detect a difference in just one component.
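You can put a rough number on “a lot more data” with the standard two-sample size formula, n ≈ 2(z_α + z_β)² σ² / δ², where σ is your run-to-run standard deviation and δ is the difference you’re trying to detect. The σ and δ values below are assumptions picked for illustration, not measurements:

```python
def runs_needed(sd, diff, z_alpha=1.96, z_beta=0.84):
    """Rough runs needed per bike: n = 2 * (z_a + z_b)^2 * sd^2 / diff^2.
    Defaults correspond to 5% two-sided significance and 80% power."""
    return 2 * (z_alpha + z_beta) ** 2 * sd ** 2 / diff ** 2

# Assumed numbers: ~9 s of day-to-day scatter on a ~5-minute segment.
print(round(runs_needed(sd=9, diff=5)))  # detect a ~5 s bike difference
print(round(runs_needed(sd=9, diff=1)))  # detect a ~1 s component difference
```

On those assumed numbers you’d need ~51 runs per bike for a 5-second difference, and over a thousand per bike for a 1-second one, which is why detecting a single component change from ride data is basically hopeless.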