The fuchsia-binary-size trybot exists for two reasons:

1. To measure and make developers aware of the binary size impact of their
   commits.
2. To help prevent unintended binary size regressions.
The bot provides analysis using:

* The "Generate commit size analysis files" step of the trybot.
* The "Generate Diffs" step. This summarizes how each package (`cast_runner`
  and `web_engine`) grew, shrank, or stayed the same.

The stdout of the "Read diff results" step will also give a breakdown of
growth by package, if any:

```
{
  "archive_filenames": [],
  "compressed": {
    "cast_runner": 0,
    "chrome_fuchsia": 40960,  # chrome_fuchsia = web_engine + cast_runner
    "web_engine": 40960  # This package grew by 40kB (post-compression)
  },
  "links": [],
  "status_code": 1,
  [...]
  "uncompressed": {
    "cast_runner": 0,
    "chrome_fuchsia": 33444,
    "web_engine": 33444  # This package grew by 32.66kB (pre-compression)
  }
}
```
If `cast_runner` grew in size, you may need assistance from the Chrome-Fuchsia
team (fuchsia-dev@chromium.org). If you find that the feature should be
removed from a size-constrained platform, guard the code with a `BUILDFLAG`
and disable the associated feature in size-constrained builds by explicitly
setting the GN arg in `size_optimized_cast_receiver_args.gn`.
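As a rough sketch of that pattern (the target and flag names below are
hypothetical, not taken from the Chromium tree), the component's `BUILD.gn`
could declare the arg and generate a buildflag header:

```
# In the component's BUILD.gn (hypothetical names throughout).
import("//build/buildflag_header.gni")

declare_args() {
  # On by default; size-constrained builds override this to false.
  enable_my_feature = true
}

buildflag_header("buildflags") {
  header = "buildflags.h"
  flags = [ "ENABLE_MY_FEATURE=$enable_my_feature" ]
}
```

C++ code that includes the generated header can then be wrapped in
`#if BUILDFLAG(ENABLE_MY_FEATURE)`, and `size_optimized_cast_receiver_args.gn`
would set `enable_my_feature = false`.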
Add a footer to the commit description along the lines of:

* `Fuchsia-Binary-Size: Size increase is unavoidable.`
* `Fuchsia-Binary-Size: Increase is temporary.`
* `Fuchsia-Binary-Size: See commit description.` <-- use this if the
  explanation is longer than one line.
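For example, a commit description carrying one of these footers might look
like the following (the subject, body, and bug number are invented for
illustration):

```
[fuchsia] Enable frobnication in web_engine

Frobnication requires linking an extra codec, which costs
about 16K compressed on Fuchsia.

Bug: 000000
Fuchsia-Binary-Size: Size increase is unavoidable.
```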
There is more documentation available about the `Fuchsia-Binary-Size:` and
other footers.

The size metric we care about the most is the compressed size. This is an
estimate of how large the Chrome-Fuchsia packages will be when delivered on
device (actual compression can vary between devices, so the computed numbers
may not be exact). However, you may see the uncompressed and compressed sizes
grow by different amounts (and sometimes the compressed size is larger than
the uncompressed one)!

This is due to how sizes are calculated and how the compression is done. The
uncompressed size is exactly that: the size of the package before it is
compressed.

Compression is done via the `blobfs-compression` tool, exported from the
Fuchsia SDK. This compresses the file into a package that is ready to be
deployed to a Fuchsia device. With the current (default) compression mode, the
package is compressed in pages sized for the device and filesystem. Since each
page is at least 8K, increments in the compressed size are always multiples of
8K under this compression mode. So, if your change causes the last compression
page to go over its limit, you may see an 8K increase for an otherwise small
change.

Large changes will grow the compressed size by more than one page's worth (to
at least 16K), which is why we only monitor 12K+ changes (a 12K delta isn't
actually possible due to the 8K page size) and not 8K+ changes.
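To make the page-size effect concrete, here is a minimal Python sketch (an
illustration, not the bot's actual accounting code) of rounding compressed
sizes up to whole 8K pages:

```
PAGE_SIZE = 8 * 1024  # blobfs compresses in pages of at least 8K.

def rounded_up(compressed_bytes):
    """Round a compressed size up to a whole number of 8K pages."""
    return -(-compressed_bytes // PAGE_SIZE) * PAGE_SIZE  # ceiling division

# A 100-byte addition that happens to cross a page boundary is reported
# as a full 8K increase:
print(rounded_up(4 * PAGE_SIZE + 100) - rounded_up(4 * PAGE_SIZE))  # -> 8192
```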
You are responsible only for pre-compression size increases. If your change
did not cause a pre-compression size increase but still failed the builder,
please ignore the failure using the `Fuchsia-Binary-Size:` footer.
If you want to check your change's impact on binary size locally (instead of
against the trybot), do the following:
Add the following to your `.gclient` config:

```
{
  # ... whatever you have here ...
  "solutions": [
    {
      # ... whatever you have here ...
    }
  ],
  "target_os": [
    "fuchsia"
  ],
}
```
Then run `gclient sync` to add the `fuchsia-sdk` to your `third_party`
directory.
Set up a build directory with the following GN args:

```
import("//build/config/fuchsia/size_optimized_cast_receiver_args.gn")
dcheck_always_on = false
is_debug = false
is_official_build = true
target_cpu = "arm64"
target_os = "fuchsia"
use_goma = true  # If appropriate.
```
Build the `fuchsia_sizes` target:

```
autoninja -C <path/to/out/dir> fuchsia_sizes
```
Run the size script with the following command:

```
build/fuchsia/binary_sizes.py --build-out-dir <path/to/out/dir>
```
The size breakdown by blob and package will be given, followed by a summary at
the end for `chrome_fuchsia`, `web_engine`, and `cast_runner`. The number that
is deployed to the device is the `compressed` version.
TODO(crbug.com/1296349): Fill this out.
(shamelessly stolen from this doc, but looking for any tips on how to improve this for Fuchsia specifically)
Look at the blobs in the breakdown and check that your change hasn't added a
huge number of new blobs.
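One low-tech way to compare the blob lists, assuming you have builds both with
and without your change and that the breakdown is printed to stdout as
described above (the output-directory and file names here are just examples):

```
$ build/fuchsia/binary_sizes.py --build-out-dir out/without_change > before.txt
$ build/fuchsia/binary_sizes.py --build-out-dir out/with_change > after.txt
$ diff before.txt after.txt  # look for newly added blobs
```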
Bloaty can be used to determine the composition of the binary and can be
helpful for finding the cause of an increase. It can be run directly against
the stripped binaries (`<out dir>/web_engine_exe` and
`<out dir>/cast_runner_exe`). However, if you want more information, you will
have to run it against the unstripped binaries (located in
`<out dir>/exe.unstripped`). You only need to run Bloaty against the binary
your change affected.

```
$ bloaty -d compileunits,symbols $OUT_DIR/exe.unstripped/web_engine_exe \
    -n $ROW_LIMIT -s vm
```
* `-n $ROW_LIMIT` determines the number of rows to show per level before
  collapsing the rest. Setting it to `0` shows all rows. The default is 20.
* `-s vm` sorts by virtual memory (VM) size. This is the metric that grows
  most closely in line with the binary-size bot's size metric.
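Putting the flags together, a concrete invocation might look like this (the
output directory `out/fuchsia-size` is just an example; `-n 0` prints every
row instead of collapsing after the default 20):

```
$ bloaty -d compileunits,symbols \
    out/fuchsia-size/exe.unstripped/web_engine_exe -n 0 -s vm
```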
NOTE: The sizes reported by Bloaty will not be exactly the same as those
reported by the `binary_sizes` script, since Bloaty analyzes the uncompressed
(and potentially unstripped) binary, but the relative growth it reports can
point you in the right direction. The `File Size` column can vary a lot due to
debug symbol information; the `VM Size` column is usually a good lead.
If Bloaty reports your change decreased the uncompressed size, use a footer to ignore the check.
You can also directly generate a comparison with the following:
```
$ bloaty -d compileunits,symbols \
    $OUT_DIR_WITH_CHANGE/exe.unstripped/web_engine_exe -n $ROW_LIMIT -s vm -- \
    $OUT_DIR_WITHOUT_CHANGE/exe.unstripped/web_engine_exe
```
You can find out more about sections of ELF binaries here.