Tool Harvest is designed to harvest information from up to two DUTs (devices under test) connected to the workstation. It is actually a collection of tools, with each tool invoked using the `-tool` command-line option:

> harvest -tool <tool-name> [options for tool-name]

Available tool names are:

- `profile`
- `device-info`
- `gpuvis`
These tools have multiple options that are configured through a JSON configuration file. Sample config files are available in sub-dir ./sample-config.
Harvest takes several command-line options, including:

- `-config <path to config file>`: the JSON configuration file to use.
- `-tool <tool-name>`: the tool to run, e.g. `profile`.

Harvest is configured through a JSON configuration file that has the following structure:
    {
        "TargetDevice1":  { Profile configuration for 1st target device. },
        "TargetDevice2":  { Profile configuration for 2nd target device. },
        "Harvest":        { Harvest uses the config info in this section for the profile tool. },
        "DeviceInfoTool": { Harvest uses the information in this section for the device-info tool. },
        "GpuVis":         { Harvest uses the information in this section for the gpuvis tool. }
    }
Tip: Harvest ignores config properties that it does not recognize. This is useful for turning options on and off. For instance, if you want to run Profile on only the 1st target device, you can temporarily disable the 2nd target device by renaming it to something like `"disabled_TargetDevice2"`.
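For instance, with the second section renamed as below, Harvest ignores `disabled_TargetDevice2` and profiles only the first device. The file names are hypothetical placeholders, and the other config sections are omitted for brevity:

```json
{
    "TargetDevice1":          { "include": "device1-profile.json" },
    "disabled_TargetDevice2": { "include": "device2-profile.json" }
}
```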
Up to two target devices are specified with the properties `"TargetDevice1"` and `"TargetDevice2"`. Each of these properties contains the entire Profiler configuration, including how to reach the device through SSH and how to run profiles. For a complete description, see the README.md file for Profiler.
A few things to note:

- It is easiest to get the profile configuration working by running Profiler directly first, possibly with its `-verbose` option. Once you have that working, plug the profiler config into the TargetDevice section for Harvest:

      "TargetDevice1": {
          "include": "relative path to profile config for this device"
      }
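The `include` directive can be pictured as a pre-processing pass that splices the referenced file into the config before the tool runs. The sketch below is only an illustration of that idea, not Harvest's actual loader; it assumes includes are JSON files resolved relative to the directory of the top-level config:

```python
import json
from pathlib import Path

def load_config(path):
    """Load a JSON config, expanding {"include": "relative/path"} sections."""
    path = Path(path)
    config = json.loads(path.read_text())
    for key, value in config.items():
        if isinstance(value, dict) and "include" in value:
            # Replace the section with the contents of the included file,
            # resolved relative to the directory of the top-level config.
            config[key] = json.loads((path.parent / value["include"]).read_text())
    return config
```

With this scheme, the TargetDevice sections stay short while the full Profiler configuration lives in its own file next to the Harvest config.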
Use Harvest's `profile` tool option to run traces on target devices and collect performance data:

> harvest -config <path to config file> -tool profile
Harvest automates several tasks: fetching trace or game archives from Google Storage or a local directory, caching them locally, and running the companion tool Profile on each target device. This is configured through the `"Harvest"` section in the config file, which looks as follows:
    "Harvest": {
        "traceCacheDir": "local dir path where trace and trace archives are cached",
        "keepTracesInCache": true,
        "profileBinPath": "path to companion tool Profile",
        "delay": 60,
        "traces": [ list of trace or game archives, in Google Storage or in local dir ]
    },
Recall that how profiling is carried out on each target device is configured through the `TargetDevice1/2` configuration.
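As a concrete illustration, a filled-in `"Harvest"` section might look like the following. All paths, bucket names, and trace names here are hypothetical placeholders, and the exact form of the trace entries may differ; consult the sample configs in ./sample-config for authoritative examples:

```json
"Harvest": {
    "traceCacheDir": "/home/gwink/harvest-cache",
    "keepTracesInCache": true,
    "profileBinPath": "/home/gwink/bin/profile",
    "delay": 60,
    "traces": [
        "gs://my-trace-bucket/game1-trace.tar.gz",
        "/home/gwink/traces/game2-trace.tar.gz"
    ]
}
```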
To collect device info from the target devices, use tool option `device-info`:

> harvest -config <path to config file> -tool device-info
Again, Harvest automates a number of tasks:

- It copies the companion tool `get_device_info` to the target device.
- It runs `get_device_info` to collect the data in the form of protobufs.
- It uploads the protobufs to the results database with the script `bq_insert_pb.py`. (See the readme in results_database.)

Collecting device info is configured through the `"DeviceInfoTool"` section in the config file. It looks as follows:
    {
        "DeviceInfoTool": {
            "getDeviceInfoBinPath": "<path to>/bin/get_device_info",
            "protoBufsOutputDir": "<path to folder for>/protobufs",
            "uploadScript": "<path to>/results_database/bq_insert_pb.py",
            // Optional owner.
            "owner": "user/gwink",
            "machine": {
                "enabled": true,
                "uploadToDb": "yes"
            },
            "software": {
                "enabled": true,
                "skipPackages": false,
                "alsoRunOnParent": true,
                "uploadToDb": "yes"
            }
        }
    }
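When `"uploadToDb"` is enabled, Harvest uploads new protobufs only if they differ meaningfully from the previous ones cached in `protoBufsOutputDir`, ignoring fields such as the creation time. The sketch below illustrates that kind of comparison, assuming the records are available as flat JSON-like dicts and that the volatile field names are known; it is not Harvest's actual implementation:

```python
def meaningfully_different(old, new, ignored_fields=("creation_time",)):
    """Report whether two device-info records differ once volatile
    fields (e.g. the creation time) are stripped out."""
    def strip(record):
        # Drop fields that change on every run and carry no device info.
        return {k: v for k, v in record.items() if k not in ignored_fields}
    return strip(old) != strip(new)
```

Under this rule, a record that differs only in `creation_time` is treated as unchanged and is not re-uploaded, which is why the machine name and `protoBufsOutputDir` must stay consistent between runs.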
Important notes:

- Harvest uses the machine name from the `"TargetDevice1/2"` config section to form file names for the protobufs, for example `machine_info_gwink-helios-C278883.json`.
- When `"uploadToDb"` is enabled, Harvest compares the new protobuf data with previous protobufs found in dir `protoBufsOutputDir`. It uploads the new protobufs only if there are meaningful differences. (E.g., it ignores the creation time.) Obviously, that only works if the machine name and the `protoBufsOutputDir` folder are kept consistent between runs.

Tool option `gpuvis` is intended to make it easier to collect low-level trace data and visualize it with GpuVis. It is invoked as follows:
> harvest -config <path to config file> -tool gpuvis
Tool gpuvis requires a local build of GpuVis, including all its data-collection scripts. It is configured with the `"GpuVis"` section in the configuration file:
    {
        "GpuVis": {
            // Path to working dir on chrome-OS device.
            "chromeOSWorkingDir": "gpuperf",
            // Env vars to use on chrome-OS device when gathering trace data.
            "chromeOSEnvVar": [ "USE_I915_PERF=1", "I915_PERF_METRIC=RenderPipeProfile" ],
            // GpuVis dir on local machine. GpuVis binaries and scripts should be available
            // from this dir. Trace data will also be stored here.
            "localGpuVisDir": "/home/gwink/Gaming/gpuvis",
            // What frame within the trace to loop on and how many times.
            "frameLoop": "3500-3509,300",
            // How to invoke gpuvis, relative to localGpuVisDir above. Set to empty string
            // if you do not want to run gpuvis on collected trace data.
            "runGpuVis": "./gpuvis"
        }
    }
For Googlers

Googlers will find more information about setting up a Chrome-OS device for gathering and visualizing GpuVis traces in the GpuVis How-to document.