Chrome collects performance data both in the lab and from end users, yielding thousands of individual metrics. This is an overview of how to navigate them.
At a high level, performance work in Chrome is organized into domains, such as loading, memory, and power. Each domain has critical laboratory and end-user metrics associated with it.
Chrome has multiple performance labs in which benchmarks are run on continuous builds to pinpoint performance regressions down to individual changelists.
The main lab for performance monitoring is chrome.perf. It continuously tests Chromium commits and is monitored by several perf sheriff rotations.
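Pinpointing a regression down to an individual changelist is, at its core, a bisection over a commit range. A minimal sketch of that idea follows; `run_benchmark`, the commit list, and the threshold are hypothetical stand-ins, not chrome.perf's actual tooling.

```python
# Illustrative sketch (not chrome.perf's real implementation): bisect a commit
# range to find the first changelist whose benchmark result regressed.

def find_regressing_commit(commits, run_benchmark, threshold):
    """Binary-search for the first commit whose score exceeds `threshold`.

    Assumes results are good before the culprit and bad at and after it
    (here, a higher score means slower).
    """
    lo, hi = 0, len(commits) - 1
    # Preconditions: the range actually brackets a regression.
    assert run_benchmark(commits[lo]) <= threshold
    assert run_benchmark(commits[hi]) > threshold
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if run_benchmark(commits[mid]) > threshold:
            hi = mid  # regression is at mid or earlier
        else:
            lo = mid  # regression is after mid
    return commits[hi]
```

Because each step halves the range, even a window of hundreds of commits needs only a handful of benchmark runs to isolate the culprit.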
There are also several other performance labs for specialized use cases.
The Speed Launch Metrics doc explains metrics available in UMA for end user performance. If you want to test how your change impacts these metrics for end users, you'll probably want to Run a Finch Trial.
The UMA Sampling Profiler (Googlers only) measures Chrome execution using statistical profiling, producing aggregate execution profiles across the function call tree. The profiler is useful for understanding how your code performs for end users, and the precise performance impact of code changes.
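To make "statistical profiling" concrete, here is a toy sketch of the general technique: periodically capture a running thread's call stack and aggregate the samples into per-function counts. This uses Python's `sys._current_frames()` purely for illustration; the UMA Sampling Profiler itself samples native Chrome stacks, not Python frames.

```python
# Toy statistical (sampling) profiler: sample a thread's call stack at a fixed
# interval and aggregate samples into per-function counts.

import collections
import sys
import time

def sample_stacks(target_thread_id, interval_s, duration_s):
    """Sample the target thread's stack and return per-function sample counts
    (an aggregate execution profile)."""
    counts = collections.Counter()
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        frame = sys._current_frames().get(target_thread_id)
        while frame is not None:  # walk the stack, crediting each function
            counts[frame.f_code.co_name] += 1
            frame = frame.f_back
        time.sleep(interval_s)
    return counts
```

Functions that consume more CPU time appear in more samples, so the counts approximate where execution time goes without instrumenting any code.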