Fix epsilon calculation for large-double comparisons

My whole life has been leading up to this bug fix.

TimeTicks::FromQPCValue was failing on some machines, and this turned out
to be caused by a bad epsilon value when comparing two floating-point
numbers. I have written some of the reference pages about the perils of
floating-point comparisons, so this bug really spoke to me:
https://randomascii.wordpress.com/2012/02/25/comparing-floating-point-numbers-2012-edition/

The problem is that when testing with a QPC value of int64_t max and a
QPC frequency of 2.149252 MHz, the result is around 4.29e18 microseconds,
and at that magnitude adjacent doubles are 512.0 apart. The test was
using an epsilon of 1.0 for its EXPECT_NEAR comparison, and that epsilon
is meaningless for numbers of this magnitude: either the doubles will be
identical or they will differ by a multiple of 512.0.
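(For the arithmetic, which is my derivation rather than from the original
report: 4.29e18 lies between 2^61 and 2^62, so with a 52-bit mantissa the
spacing between adjacent doubles is 2^(61-52) = 2^9 = 512.0.)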

The real-life implication of this bug is that if you run Chrome on a
machine with an uptime of 136 millennia and store TimeTicks::FromQPCValue
results in a double, you should expect errors of up to about half a
millisecond. I guess I should update this article to warn about the
risks of using double:
https://randomascii.wordpress.com/2012/02/13/dont-store-that-in-a-float/

The fix is to calculate the actual minimum meaningful epsilon at the
magnitude of the numbers being compared (the gap to the next
representable double) and use the maximum of that and 1.0 as the epsilon
parameter to EXPECT_NEAR.
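
In standalone form the calculation looks roughly like this (a sketch
only; the helper name UlpEpsilon is mine, and the real change is in the
diff at the end of this message):

  #include <algorithm>
  #include <cmath>

  // Returns an EXPECT_NEAR epsilon appropriate for |expected|: the gap
  // from |expected| to the next representable double, clamped to a
  // minimum of 1.0 because the value it is compared against comes from
  // an integral type whose rounding is not perfect.
  double UlpEpsilon(double expected) {
    const double ulp = std::nextafter(expected, INFINITY) - expected;
    return std::max(ulp, 1.0);
  }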

I have a tentative fix to googletest's DoubleNearPredFormat so that
EXPECT_NEAR will fail if passed an epsilon value that is meaninglessly
small; that change would have caught this bad test:
https://github.com/google/googletest/issues/1350
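
The idea, sketched below (my illustration of the proposal, not the
actual googletest code; the name AbsErrorIsMeaningful is hypothetical),
is to reject an abs_error smaller than the spacing of doubles at the
magnitude of the values being compared, since such an epsilon can
effectively only be satisfied by exact equality:

  #include <algorithm>
  #include <cmath>

  // Returns false when |abs_error| is below the gap between adjacent
  // doubles at the magnitude of the inputs; EXPECT_NEAR could then
  // fail with a message saying the epsilon is meaninglessly small.
  bool AbsErrorIsMeaningful(double val1, double val2, double abs_error) {
    const double magnitude = std::max(std::fabs(val1), std::fabs(val2));
    const double ulp = std::nextafter(magnitude, INFINITY) - magnitude;
    return abs_error >= ulp;
  }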

Bug: 786046
Change-Id: I92ee56309a0cab754dee97e11651ae12547a348e
Reviewed-on: https://chromium-review.googlesource.com/816053
Commit-Queue: Bruce Dawson <brucedawson@chromium.org>
Reviewed-by: Yuri Wiitala <miu@chromium.org>
Cr-Commit-Position: refs/heads/master@{#522894}
diff --git a/base/time/time_win_unittest.cc b/base/time/time_win_unittest.cc
index d8ee7e38..62143f5 100644
--- a/base/time/time_win_unittest.cc
+++ b/base/time/time_win_unittest.cc
@@ -277,9 +277,9 @@
 
   // Test that the conversions using FromQPCValue() match those computed here
   // using simple floating-point arithmetic.  The floating-point math provides
-  // enough precision to confirm the implementation is correct to the
-  // microsecond for all |test_cases| (though it would be insufficient to
-  // confirm many "very large" tick values which are not being tested here).
+  // enough precision for all reasonable values to confirm that the
+  // implementation is correct to the microsecond, and for "very large" values
+  // it confirms that the answer is very close to correct.
   for (int64_t ticks : test_cases) {
     const double expected_microseconds_since_origin =
         (static_cast<double>(ticks) * Time::kMicrosecondsPerSecond) /
@@ -287,9 +287,18 @@
     const TimeTicks converted_value = TimeTicks::FromQPCValue(ticks);
     const double converted_microseconds_since_origin =
         static_cast<double>((converted_value - TimeTicks()).InMicroseconds());
+    // When we test with very large numbers we end up in a range where adjacent
+    // double values are far apart - 512.0 apart in one test failure. In that
+    // situation it makes no sense for our epsilon to be 1.0 - it should be
+    // the difference between adjacent doubles.
+    double epsilon = nextafter(expected_microseconds_since_origin, INFINITY) -
+                     expected_microseconds_since_origin;
+    // Epsilon must be at least 1.0 because converted_microseconds_since_origin
+    // comes from an integral value and the rounding is not perfect.
+    if (epsilon < 1.0)
+      epsilon = 1.0;
     EXPECT_NEAR(expected_microseconds_since_origin,
-                converted_microseconds_since_origin,
-                1.0)
+                converted_microseconds_since_origin, epsilon)
         << "ticks=" << ticks << ", to be converted via logic path: "
         << (ticks < Time::kQPCOverflowThreshold ? "FAST" : "SAFE");
   }