MLModelScope

The current Deep Learning (DL) landscape is fast-paced and rife with non-uniform models and hardware/software (HW/SW) stacks, yet it lacks a DL benchmarking platform to facilitate the evaluation and comparison of DL innovations, be they models, frameworks, libraries, or hardware. Without such a platform, the current practice of evaluating the benefits of proposed DL innovations is both arduous and error-prone, stifling their adoption.

MLModelScope is a framework- and hardware-agnostic distributed platform for benchmarking and profiling DL models across datasets/frameworks/systems. MLModelScope offers a unified and holistic way to evaluate and inspect DL models, making it easier to reproduce, compare, and analyze accuracy or performance claims of models or systems.

More specifically, MLModelScope:

  • proposes a specification for defining DL model evaluations
  • introduces techniques to consume the specification and provision the evaluation workflow on the specified HW/SW stack
  • uses a distributed scheme to manage, schedule, and handle model evaluation requests
  • defines a common abstraction API across frameworks (see the sketch after this list)
  • provides an across-stack tracing capability that allows users to inspect model execution at different HW/SW abstraction levels
  • defines an automated evaluation analysis workflow for analyzing and reporting evaluation results
  • exposes these capabilities through web and command-line interfaces
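
To make the common abstraction API idea concrete, here is a minimal sketch, in Go, of what a framework-agnostic predictor interface could look like. The Predictor interface, its method signatures, and the echoPredictor stand-in are illustrative assumptions for this sketch only; they are not MLModelScope's actual API.

```go
// Illustrative sketch only: a hypothetical framework-agnostic predictor
// abstraction. Names and signatures are assumptions, not MLModelScope's API.
package main

import (
	"context"
	"fmt"
)

// Predictor is a hypothetical common abstraction that each framework agent
// could implement, so the rest of the platform can load models and run
// inference without framework-specific code.
type Predictor interface {
	// Load prepares the model identified by name/version on the agent's HW/SW stack.
	Load(ctx context.Context, name, version string) error
	// Predict runs inference on a batch of preprocessed inputs.
	Predict(ctx context.Context, inputs [][]float32) ([][]float32, error)
	// Close releases any resources held by the predictor.
	Close() error
}

// echoPredictor is a stand-in implementation used only to make this sketch runnable.
type echoPredictor struct{ loaded string }

func (p *echoPredictor) Load(ctx context.Context, name, version string) error {
	p.loaded = name + ":" + version
	return nil
}

func (p *echoPredictor) Predict(ctx context.Context, inputs [][]float32) ([][]float32, error) {
	// A real agent would invoke the underlying framework's inference engine here.
	return inputs, nil
}

func (p *echoPredictor) Close() error { return nil }

func main() {
	var pred Predictor = &echoPredictor{}
	ctx := context.Background()
	if err := pred.Load(ctx, "ResNet50", "1.0"); err != nil {
		panic(err)
	}
	out, _ := pred.Predict(ctx, [][]float32{{0.1, 0.2, 0.3}})
	fmt.Println("outputs:", out)
	_ = pred.Close()
}
```

Under this kind of abstraction, each supported framework would provide its own implementation behind the same interface, which is what lets the platform schedule, trace, and compare evaluations without framework-specific logic in the rest of the stack.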

Note that MLModelScope and CarML are used interchangeably within these documents. CarML (Cognitive Artifacts for Machine Learning) is the internal code name for MLModelScope.

Supported Frameworks

Supported OS

Supported Hardware

  • x86
  • PowerPC
  • ARM
  • GPU
  • FPGA

Contributing

Feel free to dive in! Open an issue or submit PRs. MLModelScope follows the Contributor Covenant Code of Conduct.

Resources