NN clusterizer: Bug-fixes and adding deterministic mode (#14530)
* Adding first version of kernel timers
* Removing GPU_CONFIG_KEY from dpl-workflow.sh to set my own values
* Bug fixes
* undoing changes in dpl-workflow.sh
* Further fixes and beautifications
* Please consider the following formatting changes
* Removing unused timers
* Moving Stop() of classification timer
* Adding force method to fill input like it is done on GPU
* Removing unnecessary static asserts
* Adding deterministic mode (unfortunately that did not make it deterministic on GPU -> general problem with ONNX)
* Please consider the following formatting changes
* Adjusting for comment
* Adding deterministic mode
* Please consider the following formatting changes
---------
Co-authored-by: ALICE Action Bot <alibuild@cern.ch>
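As noted in the commit list, the deterministic mode is requested through ONNX Runtime's session options, and the diff below adds the corresponding nnInferenceUseDeterministicCompute setting. The following is a minimal sketch of how such a flag is typically forwarded, not the O2 implementation; it assumes an ONNX Runtime version (>= 1.17) whose C++ API exposes Ort::SessionOptions::SetDeterministicCompute.

```cpp
// Illustrative sketch only (not the O2 code): forwarding a
// nnInferenceUseDeterministicCompute-style flag to ONNX Runtime.
// Assumes ONNX Runtime >= 1.17, where SetDeterministicCompute is available.
#include <onnxruntime_cxx_api.h>

Ort::SessionOptions makeDeterministicOptions(int useDeterministicCompute)
{
  Ort::SessionOptions opts;
  // Ask ONNX Runtime for deterministic kernels where it supports them; as the
  // commit message notes, this alone did not make the GPU path deterministic.
  opts.SetDeterministicCompute(useDeterministicCompute != 0);
  return opts;
}
```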
AddOption(nnInferenceIntraOpNumThreads, int, 1, "", 0, "Number of threads used to evaluate one neural network (ONNX: SetIntraOpNumThreads). 0 = auto-detect, can lead to problems on SLURM systems.")
AddOption(nnInferenceInterOpNumThreads, int, 1, "", 0, "Number of threads used to evaluate one neural network (ONNX: SetInterOpNumThreads). 0 = auto-detect, can lead to problems on SLURM systems.")
AddOption(nnInferenceEnableOrtOptimization, unsigned int, 99, "", 0, "Enables graph optimizations in ONNX Runtime. Can be [0, 1, 2, 99] -> see https://github.com/microsoft/onnxruntime/blob/3f71d637a83dc3540753a8bb06740f67e926dc13/include/onnxruntime/core/session/onnxruntime_c_api.h#L347")
+AddOption(nnInferenceUseDeterministicCompute, int, 0, "", 0, "Enables deterministic compute in ONNX Runtime where possible. Can be [0, 1] -> see https://github.com/microsoft/onnxruntime/blob/3b97d79b3c12dbf93aa0d563f345714596dc8ab6/onnxruntime/core/framework/session_options.h#L208")
AddOption(nnInferenceOrtProfiling, int, 0, "", 0, "Enables profiling of model execution in ONNX Runtime")
AddOption(nnInferenceOrtProfilingPath, std::string, ".", "", 0, "If nnInferenceOrtProfiling is set, the path to store the profiling data")
AddOption(nnInferenceVerbosity, int, 1, "", 0, "0: No messages; 1: Warnings; 2: Warnings + major debugs; >3: All debugs")
@@ -275,6 +276,8 @@ AddOption(nnClassThreshold, float, 0.5, "", 0, "The cutoff at which clusters wil
AddOption(nnRegressionPath, std::string, "network_reg.onnx", "", 0, "The regression network path")
AddOption(nnSigmoidTrafoClassThreshold, int, 1, "", 0, "If true (default), then the classification threshold is transformed by an inverse sigmoid function. This depends on how the network was trained (with a sigmoid as activation function in the last layer or not).")
AddOption(nnEvalMode, std::string, "c1:r1", "", 0, "Concatenation of modes, e.g. c1:r1 (classification class 1, regression class 1)")
+AddOption(nnClusterizerUseClassification, int, 1, "", 0, "If 1, the classification output of the network is used to select clusters, else only the regression output is used and no clusters are rejected by classification")
+AddOption(nnClusterizerForceGpuInputFill, int, 0, "", 0, "Forces use of the fillInputNNGPU function")
// CCDB
AddOption(nnLoadFromCCDB, int, 0, "", 0, "If 1 networks are fetched from ccdb, else locally")
AddOption(nnLocalFolder, std::string, ".", "", 0, "Local folder in which the networks will be fetched")
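For reference, the nnInference* options above configure the ONNX Runtime session that evaluates the networks. The sketch below only illustrates what each option controls, assuming the standard ONNX Runtime C++ API; the NNInferenceConfig struct and configureSession helper are hypothetical, while the option names and defaults are taken from the diff.

```cpp
// Illustrative sketch: typical mapping of the nnInference* options onto
// Ort::SessionOptions. NNInferenceConfig/configureSession are hypothetical.
#include <onnxruntime_cxx_api.h>
#include <string>

struct NNInferenceConfig {
  int intraOpNumThreads = 1;               // nnInferenceIntraOpNumThreads
  int interOpNumThreads = 1;               // nnInferenceInterOpNumThreads
  unsigned int enableOrtOptimization = 99; // nnInferenceEnableOrtOptimization
  int ortProfiling = 0;                    // nnInferenceOrtProfiling
  std::string ortProfilingPath = ".";      // nnInferenceOrtProfilingPath
};

Ort::SessionOptions configureSession(const NNInferenceConfig& cfg)
{
  Ort::SessionOptions opts;
  // 0 would let ONNX Runtime auto-detect the thread counts, which the option
  // help warns can cause problems on SLURM systems; the defaults pin 1 each.
  opts.SetIntraOpNumThreads(cfg.intraOpNumThreads);
  opts.SetInterOpNumThreads(cfg.interOpNumThreads);
  // 0/1/2/99 correspond to ORT_DISABLE_ALL, ORT_ENABLE_BASIC,
  // ORT_ENABLE_EXTENDED and ORT_ENABLE_ALL of GraphOptimizationLevel.
  opts.SetGraphOptimizationLevel(
    static_cast<GraphOptimizationLevel>(cfg.enableOrtOptimization));
  if (cfg.ortProfiling) {
    // EnableProfiling takes a file-name prefix for the JSON trace output.
    opts.EnableProfiling((cfg.ortProfilingPath + "/ort_profile").c_str());
  }
  return opts;
}
```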
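The nnSigmoidTrafoClassThreshold option states that the classification cut (nnClassThreshold) is transformed by an inverse sigmoid, so that a probability cutoff can be applied directly to the raw network output. A minimal sketch of that transform, with a hypothetical helper name:

```cpp
// Minimal sketch of the inverse-sigmoid (logit) transform implied by
// nnSigmoidTrafoClassThreshold: a probability cut p on a sigmoid output is
// equivalent to a cut of log(p / (1 - p)) on the pre-sigmoid network output.
// transformClassThreshold is a hypothetical helper, not O2 code.
#include <cmath>

float transformClassThreshold(float nnClassThreshold, bool sigmoidTrafo)
{
  if (!sigmoidTrafo) {
    return nnClassThreshold; // compare against the network output as-is
  }
  // logit(p): e.g. p = 0.5 becomes a cut of 0.0 on the raw output
  return std::log(nnClassThreshold / (1.f - nnClassThreshold));
}
```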