Fix epsilon calculation for large-double comparisons

My whole life has been leading up to this bug fix.

TimeTicks::FromQPCValue was failing on some machines, and this turned
out to be caused by a bad epsilon value when comparing two
floating-point numbers. I have written some of the reference pages
about the perils of floating-point comparisons, so this bug really
spoke to me.

The problem is that when testing with a QPC value of int64_t max and
a QPC frequency of 2.149252 MHz, the result is around 4.29e18, and at
that magnitude the spacing between adjacent doubles (one ULP) is 512.0.
The test was using an epsilon of 1.0 for its EXPECT_NEAR comparison,
and an epsilon that small is meaningless for numbers of that magnitude:
either the doubles will be identical or they will differ by a multiple
of 512.0.

The real-life implication of this bug is that if you run Chrome on a
machine with an uptime of 136 millennia and store the result of
TimeTicks::FromQPCValue in a double, you should expect less than half a
millisecond of precision. I guess I should update this article to warn
about the risks of using double.

The fix is to calculate the actual minimum epsilon at the magnitude of
the numbers and use the maximum of that and 1.0 as the epsilon
parameter.

I have a tentative fix to DoubleNearPredFormat so that EXPECT_NEAR will
fail if passed an epsilon value that is meaninglessly small; this
change would have detected the bad test.

Bug: 786046
Change-Id: I92ee56309a0cab754dee97e11651ae12547a348e
Commit-Queue: Bruce Dawson <>
Reviewed-by: Yuri Wiitala <>
Cr-Commit-Position: refs/heads/master@{#522894}
1 file changed