Related Work

With the fast-paced nature of ML innovation, the lack of a standard for publishing, evaluating, and profiling ML models is a significant pain point for AI consumers. Application builders, who may have limited ML knowledge, struggle to discover and experiment with state-of-the-art models within their application pipelines. Data scientists find it difficult to reproduce, reuse, or gather unbiased comparisons of published models. And, finally, system developers often fail to keep up with current trends and lag behind in measuring and optimizing frameworks, libraries, and hardware. There are concerted efforts by ML stakeholders to remedy this. This section describes some of these efforts and how MLModelScope differs.


Benchmarking

To compare models and HW/SW stacks, both research and industry have developed coarse-grained (model-level) and fine-grained (layer-level) benchmark suites. These benchmarks are designed to be run manually and offline, and their authors encourage users to submit evaluation results. The submitted results are curated into a scoreboard of model performance across systems.

  • Reference Workloads — There have been efforts [1,2] to codify a set of ML applications representative of modern AI workloads and thus enable comparisons across hardware stacks. These workloads aim to serve a purpose similar to that of the SPEC benchmarks for CPUs.
  • Model Benchmark Suites — There has been work to replicate and measure the performance of published ML models [3,4,5,6,7,8,9,10]. These benchmark suites provide scripts to run frameworks and models, capturing the end-to-end time of model execution and evaluating accuracy.
  • Layer Benchmark Suites — At the other end of the spectrum, system and framework developers have developed sets of fine-grained benchmarks that profile specific layers [3,11,12,13]. The target users of these benchmarks are compiler writers (to propose new transformations and analyses for the loop structures found within ML kernels) and system researchers (to propose new hardware to accelerate ML workloads).
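A layer benchmark typically times one kernel in isolation over many repetitions, discarding warm-up runs and reporting the best observed time. The sketch below illustrates this convention; the plain-Python matrix multiply is only a stand-in for a real framework kernel, and the function names are invented for illustration:

```python
import time

def matmul(a, b):
    # Stand-in for the framework layer/kernel under test.
    n, k, m = len(a), len(b), len(b[0])
    return [[sum(a[i][x] * b[x][j] for x in range(k)) for j in range(m)]
            for i in range(n)]

def bench(fn, *args, warmup=2, reps=5):
    # Warm up, then report the best of `reps` timed runs, a common
    # convention in layer-level benchmark suites.
    for _ in range(warmup):
        fn(*args)
    times = []
    for _ in range(reps):
        start = time.perf_counter()
        fn(*args)
        times.append(time.perf_counter() - start)
    return min(times)

a = [[1.0] * 64 for _ in range(64)]
b = [[1.0] * 64 for _ in range(64)]
best = bench(matmul, a, b)  # best-of-5 wall-clock time in seconds
```

Reporting the minimum rather than the mean reduces the influence of OS noise, which is why many microbenchmark harnesses default to it.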

Model Zoo

Curated model repositories [14,15,16,17,18,19,20,21,22] are maintained by frameworks. These framework-specific model zoos are used for testing or demonstrating the kinds of models a framework supports. There are also catalogs of models [23,24] and public hubs linking ML/DL papers with their corresponding code [25].

Artifact Management Frameworks

[26] proposes a model catalog design to store and search developed models. The design also includes a model versioning scheme and a domain-specific language for searching through the model catalog. [27] manages ML models and experiments by maintaining metadata and links to the artifacts, and provides a web UI to visualize and compare experiment results. [28] defines a common layer of abstractions to represent ML models and pipelines, and provides a web front end for visual exploration. FAI-PEP [29] is a benchmarking framework targeting mobile devices that features performance regression detection.
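The core of such a catalog is a mapping from model names to versioned metadata. The following is a minimal sketch of that idea, assuming a simple "major.minor" version scheme; it is illustrative only and not the actual design of [26]:

```python
# Minimal model catalog sketch: name -> {version -> metadata}.
# Hypothetical structure, not the design of any cited system.
catalog = {}

def publish(name, version, metadata):
    # Register (or overwrite) one version of a model.
    catalog.setdefault(name, {})[version] = metadata

def latest(name):
    # Versions are "major.minor" strings compared as integer tuples.
    versions = catalog[name]
    return max(versions, key=lambda v: tuple(map(int, v.split("."))))

publish("resnet50", "1.0", {"framework": "mxnet"})
publish("resnet50", "1.2", {"framework": "mxnet"})
newest = latest("resnet50")
```

A real catalog would add search (the DSL mentioned above) and artifact storage on top of this mapping.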

Profiling Tools

The current practice of measuring and profiling ML models is cumbersome. It relies on a concoction of tools, each aimed at capturing ML model performance characteristics at a different granularity, or level, within the HW/SW stack. Across-stack profiling thus means using multiple tools and stitching their outputs together, which researchers often do ad hoc and manually. It is difficult, and sometimes impossible, to stitch and correlate results from these disjoint profiling tools into a consistent across-stack timeline.

  • To profile at the application or model level, one must manually log the time taken by the important steps within the pipeline.
  • To profile at the framework level, one enables the built-in, or community-contributed, framework profiler [30,31,32], which usually writes the profile to a file. These framework profilers are typically bundled with the frameworks and aim to help users understand a framework's layer performance and execution pipeline.
  • To understand model performance within a layer, one either uses tools to intercept and log library calls (such as strace [33] or DTrace [34]) or uses hardware vendors' profilers (such as NVIDIA's nvprof, NVVP, and Nsight [35,36,37] or Intel's VTune [38]).
  • To capture hardware- and OS-level events, one uses yet another set of tools, such as PAPI and Perf.
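The model-level portion of this practice, manually logging the time taken by each pipeline step, can be sketched in a few lines. The `preprocess` and `predict` functions below are placeholder stand-ins for a user's real pipeline stages:

```python
import time
from contextlib import contextmanager

# Placeholder pipeline stages; in practice these wrap real framework calls.
def preprocess(data):
    return [x * 2 for x in data]

def predict(batch):
    return [x + 1 for x in batch]

@contextmanager
def timed(label, log):
    # Record wall-clock duration of the enclosed step under `label`.
    start = time.perf_counter()
    yield
    log[label] = time.perf_counter() - start

log = {}
with timed("preprocess", log):
    batch = preprocess([1, 2, 3])
with timed("predict", log):
    out = predict(batch)
# log now maps each step name to its duration in seconds.
```

Note that timings gathered this way live in their own log, disconnected from framework- and hardware-level profiles, which is precisely the stitching problem described above.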

We observe that the inability to rapidly understand the performance of state-of-the-art models is partly due to the lack of tools or methods that allow researchers to introspect model performance across the HW/SW stack, while remaining agile enough to cope with the diverse and fast-paced ML landscape.


Reproducibility

Because of the lack of an ML model publishing and evaluation standard, models shared through repositories (e.g. GitHub), where the authors may include information on the HW/SW stack requirements and ad-hoc scripts to run the experiments, are hard to reproduce. CK [39] is a community-driven Python framework to abstract, reuse, and share R&D workflows and Python modules. To ensure reproducibility, CK uses JSON meta-descriptions to describe the software stack of the workflows. Similar to a Python package manager, CK manages workflows and the corresponding Python modules. Other solutions leverage Nix [40], Spack [41], and Docker [42].
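To illustrate the JSON meta-description approach, the sketch below builds and round-trips a description of a hypothetical workflow's software stack. The field names are invented for illustration and are not CK's actual schema:

```python
import json

# Illustrative meta-description of a workflow's software stack
# (hypothetical fields, not CK's actual schema).
meta = {
    "model": "resnet50-v1",
    "framework": {"name": "tensorflow", "version": "1.13.1"},
    "dependencies": ["numpy", "opencv-python"],
    "hardware": {"gpu": "required", "min_memory_gb": 4},
}

serialized = json.dumps(meta, indent=2, sort_keys=True)
restored = json.loads(serialized)
# Round-tripping through JSON keeps the description machine-checkable,
# so a tool can validate an environment against it before running.
```

The appeal of this style is that the description is both human-readable and programmatically comparable against the actual environment.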


  1. MLPerf.
  2. HPE Deep Learning Performance Guide.
  3. AI-Matrix.
  4. Fathom: Reference workloads for modern deep learning methods, IISWC 2016.
  5. DAWNBench: An End-to-End Deep Learning Benchmark and Competition, SOSP 2017.
  6. Analysis of DAWNBench, a Time-to-Accuracy Machine Learning Performance Benchmark.
  7. DNNMark: A Deep Neural Network Benchmark Suite for GPUs, GPGPU 2017.
  8. Latency and Throughput Characterization of Convolutional Neural Networks for Mobile Computer Vision.
  9. Performance analysis of CNN frameworks for GPUs, ISPASS 2017.
  10. TBD: Benchmarking and Analyzing Deep Neural Network Training, IISWC 2018.
  11. DeepBench.
  12. ConvNet Benchmarks.
  13. LSTM Benchmarks for Deep Learning Frameworks.
  14. Caffe2 Model Zoo.
  15. Caffe Model Zoo.
  16. Gluon CV.
  17. Gluon NLP.
  18. ONNX Model Zoo.
  19. TensorFlow Detection Model Zoo.
  20. TensorFlow-Slim Image Classification Model Library.
  21. TensorFlow Hub.
  22. PyTorch Vision.
  23. Modelhub.
  24. ModelZoo.
  25. Papers with Code.
  26. Modelhub: Deep learning lifecycle management, ICDE 2017.
  27. Runway: machine learning model experiment management tool, SysML 2018.
  28. ModelDB: a system for machine learning model management.
  29. Facebook AI Performance Evaluation Platform.
  30. TensorFlow Profiler.
  31. MXNet Profiler.
  32. PyTorch Autograd.
  33. strace.
  34. DTrace.
  35. nvprof.
  36. NVIDIA Visual Profiler.
  37. Nsight.
  38. Intel VTune.
  39. A collective knowledge workflow for collaborative research into multi-objective autotuning and machine learning techniques.
  40. Nix Package Manager.
  41. Spack Package Manager.
  42. Docker.