Given a trained TensorFlow model, tf.native generates C++ code to run the model
on an input example and return a value. This code is generated offline and
committed at `native_inference.h` and `native_inference.cc`.
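For concreteness, the raw generated code tends to look roughly like the following hypothetical reduction (real generated files use much longer machine-derived identifiers; the `USE_EIGEN`, `BENCHMARK_TIMER`, and `#define` patterns shown here are the ones targeted by the cleanup below):

```cpp
#include <cassert>

// Expands to nothing in this build; a deletion candidate.
#define BENCHMARK_TIMER(...)

// Unscoped constants; candidates for scoped constexpr.
#define WEIGHTS_SIZE 3

#ifdef USE_EIGEN
// Eigen-based path, never compiled in this build.
#else
float Inference(const float* weights, const float* features, float bias) {
  BENCHMARK_TIMER("Inference");
  assert(weights != nullptr && features != nullptr);
  float score = bias;
  for (int i = 0; i < WEIGHTS_SIZE; ++i)
    score += weights[i] * features[i];
  return score;
}
#endif
```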
A lot of the generated code can be removed or simplified:

*   `#ifdef` wrappers that are never triggered, and macros that expand to
    no-ops
*   `#define` constants instead of scoped `constexpr`s

Approximately the following steps will clean up the generated code:
1.  Resolve the `USE_EIGEN` and `OP_LIB_BENCHMARK` `#ifdef`s, and delete those
    `#ifdef` wrappers (a utility like `unifdefall` may help). Also delete
    usage of no-op macros such as `BENCHMARK_TIMER`.
2.  Move the code into the `tab_ranker::native_inference` namespace.
3.  Fix the `native_inference.h` location included in `native_inference.cc`.
4.  Delete the unused
    `dnn_input_from_feature_columns_input_from_feature_columns_concat0Shape`
    function.
5.  Simplify the signature of the `Inference` function.
6.  Extract `constexpr int`s for weight, feature and bias sizes, and improve
    their names.
7.  Replace `assert()` calls with `CHECK()`, and include `"base/logging.h"`.
    Remove the `<cassert>` include.
When updating the model, it may be easier to keep the existing
`native_inference.*` files and simply replace the constants (weights, biases,
and array sizes). Check whether the newly generated `native_inference.cc` uses
functions that need to be added back to the prettified version, or calls those
functions in a different order.
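Under that workflow, the part that changes on each retrain can be confined to a clearly delimited constants block, for example (names and values are illustrative):

```cpp
// --- Begin model constants (regenerate on each model update) ---
constexpr int kFeatureCount = 3;
constexpr float kWeights[kFeatureCount] = {0.5f, -1.0f, 2.0f};
constexpr float kBias = 0.25f;
// --- End model constants ---
```

Keeping the hand-cleaned control flow outside these markers means a model update is a constant swap rather than a fresh cleanup pass.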