Layout Test Contest

Why? Fix all test ordering issues so we can enable --order=random-seeded by default. Drive the number of lines in TestExpectations to zero so it is easier to maintain. Win fame and prizes!

Who? Everyone! Yes, you can participate even if you’ve never contributed to Blink before! In fact, this is the perfect way to get some Blink commits under your belt! You don’t need to be a Google employee either.

Dates: The contest will begin on 9/30 ~~and end on 10/20 at 11:59pm PST~~ extended to 10/27 at 11:59pm PST!!! All code must be submitted for review during these dates to be eligible.

Scoring: Points will be awarded based on the tasks completed. Point values for each task are explained below. Each participant's score will be the sum of his or her points. See the leaderboard!

Winners & Prizes:

There are many chances to win! In addition to the standard 1st, 2nd, and 3rd Place prizes, there are special awards:

  • Cheetah: First person to 250 points. Start off the contest with a bang!
  • Top Reviewer: Awarded to the participant with the highest number of points from code reviews only. Show off your code review skillz!
  • Best Yak Shave: Awarded to the participant with the most ridiculous patch, or series of patches, required to solve a “simple” problem. Show your perseverance!
  • Top Slacker: Awarded to the participant with the highest number of points among those who start AFTER October 14, 11:59pm PST.

Winners will select a prize from a prize category. Any prize can be exchanged for a “Get out of gardening free pass!” - Priceless!

1st Prize: Awarded to the participant with the highest number of points.

**Your choice of:**

  • Indoor Sky Diving Party for 12
  • Kitchit Dinner Party for 8
  • Sennheiser HD 650 Headphones
  • iRobot Roomba or Scooba
  • LG Electronics 50LN5400 50-Inch 1080p 120Hz LED-LCD HDTV with Smart Share
  • 1982 Port Ellen 30yr Scotch

2nd Prize: Awarded to the participant with the second highest number of points.

**Your choice of:**

  • Indoor Sky Diving Party for 5
  • Kitchit Dinner Party for 4
  • Nexus 7 32G
  • Sonos Play 3
  • Breville Smart Oven Convection Toaster Oven with Element IQ
  • 1996 Opus One Napa Valley Proprietary Red

3rd Prize: Awarded to the participant with the third highest number of points.

Cheetah: First person to 250 points. Start off the contest with a bang!

Top Reviewer: Awarded to the participant with the highest number of points from code reviews only. Show off your code review skillz!

Top Slacker: Awarded to the participant with the highest number of points among those who start AFTER October 14, 11:59pm PST.

**Your choice of:**

  • Kitchit Dinner Party for 2
  • SodaStream Pure w/ Soda Starter Kit
  • Sennheiser HD 558 Headphones
  • Kindle Paperwhite, 3G
  • Breville Compact Smart Oven Toaster Oven with Element IQ
  • Macallan 18 year old Single Malt

Best Yak Shave: Awarded to the participant with the most ridiculous patch, or series of patches, required to solve a “simple” problem.

Your very own Shaving Yak Action Figure!

**Your choice of:**

  • Indoor Sky Diving for 2
  • SodaStream Genesis w/ Soda Starter Kit
  • Sennheiser HD-280 PRO Headphones
  • Kindle Paperwhite, Wi-Fi
  • AeroGarden Classic 7-Pod with Gourmet Herb Seed Kit
  • 2003 Dom Perignon

*For non-MTV (Mountain View) based winners, suitable equivalent prizes will be awarded if necessary.

How to Participate

  1. Complete any of the tasks below.
  2. For each task, submit your participation info using this form. Sorry, any task completed, but not submitted in this manner, will not be eligible.
  3. Profit!

Points

Points are not additive: each patch can count toward only one category.

C++ or Python change that fixes endemic test flakiness

  • Points: 50
  • Examples: reset some state between test runs or fix a race condition in run-webkit-tests.

Fix a line in TestExpectations for a flaky test that crashes

  • Points: 5/line
  • Includes [ Pass Crash ] lines (see the example lines below).

Fix a line in TestExpectations for a flaky test that doesn't crash

  • Points: 3/line

Remove a line from TestExpectations (with no other changes; whitespace and comment lines do *not* count)

  • Points: 1/line
  • This is for TestExpectations-only changes. If you made a non-TestExpectations-only change that fixed more tests than you realized, you can still submit a TestExpectations-only change for 3 points per line.
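
For reference, lines in TestExpectations look roughly like the following (the test paths and bug numbers here are hypothetical, for illustration only). Fixing the flakiness behind the first line falls in the 5-point category, the second and third in the 3-point category, and simply deleting a stale line like any of these is worth 1 point:

```
crbug.com/123456 [ Win Debug ] fast/dom/hypothetical-test.html [ Pass Crash ]
crbug.com/123457 [ Linux ] fast/forms/hypothetical-flaky.html [ Pass Failure ]
crbug.com/123458 fast/canvas/hypothetical-timeout.html [ Pass Timeout ]
```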

Code review a C++/Python change

  • Points: 3
  • Again, pure TestExpectations-only changes don't count.
  • It’s the reviewer’s responsibility to fill out the form above.

How To

Here are some ideas for ways to identify test flakiness/ordering issues.

1. Fix filed test ordering bugs (a good way to find endemic C++ flakiness issues!)

https://code.google.com/p/chromium/issues/list?q=label:LayoutTestOrdering

2. Run tests in a random order and diagnose failures

  1. Run “run-webkit-tests --order=random --no-retry”.
  2. Run “./Tools/Scripts/print-test-ordering” and save the output to a file. This outputs the tests run in the order they were run on each content_shell instance.
  3. For each test that fails:
    1. Find which worker it ran on.
    2. Create a file that contains only the tests run on that worker in the same order as in your saved output file.
    3. Run “run-webkit-tests --child-processes=1 --order=none --test-list=path/to/file/from/previous/step”.
    4. If the test doesn’t fail here, then the test itself is probably just flaky. If it does fail, remove some lines from the file created in step 2 and repeat step 3. Continue until you’ve found the dependency. If the test fails when run by itself but passes on the bots, that means it depends on another test to pass. In that case, you need to generate the list of tests run by “run-webkit-tests --order=natural” and repeat this process to find which test causes the test in question to *pass* (e.g. crbug.com/262793). A command sketch of this workflow appears after these steps.
    5. File a bug and give it the LayoutTestOrdering label, e.g. crbug.com/262787 or crbug.com/262791.
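
Putting the steps above together, one pass of this workflow might look like the sketch below. The file names and the failing test are hypothetical, and it assumes the commands quoted above are run from a Blink checkout with Tools/Scripts on your PATH:

```sh
# Run the suite in a random order, then dump the per-worker ordering.
run-webkit-tests --order=random --no-retry
./Tools/Scripts/print-test-ordering > /tmp/random-order.txt

# Suppose fast/dom/example.html failed and ran on worker 3: copy just that
# worker's tests, in their original order, from /tmp/random-order.txt into
# /tmp/worker3.txt (any editor or grep invocation will do).

# Replay that worker's tests serially on a single content_shell.
run-webkit-tests --child-processes=1 --order=none \
    --test-list=/tmp/worker3.txt --no-retry

# If the failure reproduces, delete chunks of lines from /tmp/worker3.txt and
# re-run until only the offending combination of tests remains.
```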

3. Run tests in isolation

Run “run-webkit-tests --batch-size 1 --no-retry”. This starts up a new content_shell instance for each test. Tests that fail when run in isolation but pass when run as part of the full test suite represent some state that we’re not properly resetting between test runs, or some state that we’re not properly setting when starting up content_shell. You might want to run with --time-out-ms=60000 to weed out tests that time out waiting on content_shell startup.
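
For reference, the two invocations described above might look like this (assuming, as elsewhere on this page, that run-webkit-tests from Tools/Scripts is on your PATH):

```sh
# Start a fresh content_shell for every test; slower, but it exposes state
# that leaks between tests.
run-webkit-tests --batch-size 1 --no-retry

# The same run with a generous timeout, to weed out tests that only time out
# because they are waiting on content_shell startup.
run-webkit-tests --batch-size 1 --no-retry --time-out-ms=60000
```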

4. Diagnose especially flaky tests

  1. Load the flakiness dashboard
  2. Tweak the flakiness threshold to the desired level of flakiness.
  3. Click on “layout-tests” to get the list of flaky tests.
  4. Diagnose the source of flakiness for that test.

5. Fix any line in TestExpectations