[Findit] Attempt to fix 'Exceeded soft memory limit' error when updating flake counts

Tried to fix this by using fetch_page, but that didn't work. According to some
research, the problem might be related to fetch_page populating ndb's in-memory
(in-context) cache.

Clearing the cache before each fetch seems to help.
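
As a rough illustration of the pattern, a minimal sketch assuming the
google.appengine.ext.ndb API; FooModel and its score property are
hypothetical stand-ins for illustration, not the real Flake model:

  from google.appengine.ext import ndb

  class FooModel(ndb.Model):  # hypothetical kind for illustration
    score = ndb.IntegerProperty()

  def process_all_entities():
    cursor = None
    more = True
    while more:
      # Entities returned by fetch_page are kept in ndb's in-context
      # cache by default; dropping them every iteration keeps memory
      # usage bounded by roughly one page of results.
      ndb.get_context().clear_cache()
      entities, cursor, more = FooModel.query(
          FooModel.score > 0).fetch_page(100, start_cursor=cursor)
      for entity in entities:
        pass  # process each entity here

Passing use_cache=False to each fetch would likely avoid the caching
altogether, but clearing the cache once per page is the smaller change.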

Change-Id: Ic0e4206bb315bb75d0e43770ba7f8c9283167b0e
Reviewed-on: https://chromium-review.googlesource.com/c/1412034
Reviewed-by: Shuotao Gao <stgao@chromium.org>
Commit-Queue: Chan Li <chanli@chromium.org>
Cr-Commit-Position: refs/heads/master@{#19985}
Cr-Mirrored-Commit: 6bd04a2c0f4f8a48df794d58a9d66705ba3653b1
diff --git a/appengine/findit/services/flake_detection/update_flake_counts_service.py b/appengine/findit/services/flake_detection/update_flake_counts_service.py
index 0316a04..9c9f464 100644
--- a/appengine/findit/services/flake_detection/update_flake_counts_service.py
+++ b/appengine/findit/services/flake_detection/update_flake_counts_service.py
@@ -139,6 +139,8 @@
   cursor = None
 
   while more:
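+    # Drop entities cached by earlier pages so they can be freed.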
+    ndb.get_context().clear_cache()
     flakes, cursor, more = Flake.query().filter(
         Flake.last_occurred_time > start_date).filter(
             Flake.flake_score_last_week == 0).fetch_page(
@@ -162,6 +164,8 @@
   cursor = None
 
   while more:
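+    # Drop entities cached by earlier pages so they can be freed.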
+    ndb.get_context().clear_cache()
     flakes, cursor, more = Flake.query().filter(
         Flake.flake_score_last_week > 0).fetch_page(
             100, start_cursor=cursor)