MLModelScope treats a model specification as a resource that is consumed and versioned, with dependence on the weights, pre/post-processing, and the framework. Multiple versions of a model can exist within the system. For example, the same version of a model deployed on different machines can be used for load balancing. The model specification (also called a model manifest) is defined with the following format:

name: InceptionNet # name of your model
version: 1.0 # version information in semantic version format
framework: # the framework to use
  name: MXNet # framework for the model
  version: ^0.1 # framework version constraint
container: # containers used to perform model prediction
  # multiple platforms can be specified
  # if unspecified, then the default container for the framework is used
  amd64:
    cpu: raiproject/MLModelScope-mxnet:amd64-cpu
    gpu: raiproject/MLModelScope-mxnet:amd64-gpu
  ppc64le:
    cpu: raiproject/MLModelScope-mxnet:ppc64le-cpu
    gpu: raiproject/MLModelScope-mxnet:ppc64le-gpu
description: >
  An image-classification convolutional network.
  Inception achieves 21.2% top-1 and 5.6% top-5 error on the ILSVRC 2012 validation dataset.
  It consists of fewer than 25M parameters.
references: # references to papers / websites / etc. describing the model
# license of the model
license: MIT
inputs: # inputs to the model
  # first input type for the model
  - type: image
    # description of the first input
    description: the input image
    parameters: # type parameters
      dimensions: [1, 3, 224, 224]
output: # the output of the model
  # the type of the output
  type: feature
  # a description of the output parameter
  description: the output label
  parameters: # type parameters
before_preprocess: >
preprocess: >
after_preprocess: >
before_postprocess: >
postprocess: >
after_postprocess: >
model: # specifies model graph and weights resources
  graph_path: Inception-BN-symbol.json
  weights_path: Inception-BN-0126.params
  is_archive: false # if set, then the base_url is a url to an archive;
    # the graph_path and weights_path then denote the
    # file names of the graph and weights within the archive
attributes: # extra network attributes
  kind: CNN # the kind of neural network (CNN, RNN, ...)
  training_dataset: ImageNet # dataset used for training
  published_accuracy: xxx # the accuracy published in paper
  expected_accuracy: xxx # the accuracy measured by MLModelScope
  manifest_author: abduld

Meta Information

A model manifest contains meta-information that identifies the model, such as its name, description, version, references, and license. Extra meta-information can be specified as a key-value map within the attributes field.


Framework

Each neural network model is associated with a framework. The combination of framework name and semantic version forms a constraint that is solved by MLModelScope (more information is found in the Resolving Framework section). This allows models to be vendored for each framework version, which is particularly important for supporting framework versions that break backward compatibility. An error occurs if MLModelScope cannot resolve the framework.

!> Note that the model version is unrelated to the framework version.

Docker Containers

A user may wish to run a model within a container other than the one defined for the framework, for example because the model requires extra pieces of software or custom layers. If a Docker container is not specified, then the container defined for the resolved framework is used; otherwise, the specified container supersedes the one defined by the framework.

The containers currently must be published on the Docker Hub registry. An error is returned if no container is found.
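The selection rule above can be sketched as follows. This is an illustrative simplification, not the actual MLModelScope implementation; the `FRAMEWORK_DEFAULT_CONTAINERS` table and the function name are assumptions.

```python
# Hypothetical table of default containers registered per framework;
# real MLModelScope derives these from the resolved framework manifest.
FRAMEWORK_DEFAULT_CONTAINERS = {
    ("MXNet", "amd64", "gpu"): "raiproject/MLModelScope-mxnet:amd64-gpu-default",
}

def resolve_container(manifest, arch, device):
    """Return the container image to run inference in.

    A container entry in the model manifest (if present) supersedes
    the default container defined for the resolved framework.
    """
    containers = manifest.get("container", {})
    image = containers.get(arch, {}).get(device)
    if image is not None:
        return image  # manifest-specified container wins
    framework = manifest["framework"]["name"]
    image = FRAMEWORK_DEFAULT_CONTAINERS.get((framework, arch, device))
    if image is None:
        # mirrors the "error is returned if no container is found" behavior
        raise LookupError(f"no container found for {framework} on {arch}/{device}")
    return image
```

For the Inception manifest above, `resolve_container(manifest, "amd64", "cpu")` would return the `amd64-cpu` image from the manifest, while an architecture/device pair absent from the manifest falls back to the framework default.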

Pre- and Post-Processing Code

A model manifest author can specify operations to occur before or after model inference. Defining code in the preprocess or postprocess fields nullifies the automatic pre- and post-processing steps that MLModelScope performs (images are not auto-resized, for example). This allows the manifest author to perform custom operations required by the model without modifying the inference engine. The pre-processing code is executed on the input data before it is fed into the model inference engine; likewise, the post-processing code is executed on the model's output.
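The hook semantics can be sketched as below. In the real manifest the hooks are code strings; here they are modeled as plain Python callables for clarity, and all names are illustrative.

```python
def run_inference(manifest, model, data, default_preprocess, default_postprocess):
    """Run one inference, honoring custom hooks from the manifest.

    If the manifest defines preprocess/postprocess code, it replaces
    (nullifies) the automatic steps such as image auto-resizing.
    """
    pre = manifest.get("preprocess")
    post = manifest.get("postprocess")
    data = pre(data) if pre else default_preprocess(data)
    out = model(data)
    return post(out) if post else default_postprocess(out)
```

Note that a manifest with a custom `preprocess` skips `default_preprocess` entirely rather than composing with it, matching the nullification behavior described above.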

Model Resources

Model resources are stored in the cloud and downloaded on demand. MLModelScope hosts its built-in models on S3, but any HTTP endpoint or Git resource is supported.
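A minimal sketch of the on-demand scheme, assuming a local content-addressed cache keyed by the resource's base URL; the cache layout and function names are assumptions, not MLModelScope's actual implementation.

```python
import hashlib
import os
from urllib.request import urlretrieve

def cache_path(base_url, file_name, cache_dir="~/.MLModelScope/cache"):
    """Deterministic local path where a remote model resource is cached."""
    key = hashlib.sha256(base_url.encode()).hexdigest()[:16]
    return os.path.join(os.path.expanduser(cache_dir), key, file_name)

def fetch(base_url, file_name, cache_dir="~/.MLModelScope/cache"):
    """Download a resource (e.g. graph_path or weights_path) on first use."""
    path = cache_path(base_url, file_name, cache_dir)
    if not os.path.exists(path):  # only hit the network on a cache miss
        os.makedirs(os.path.dirname(path), exist_ok=True)
        urlretrieve(base_url.rstrip("/") + "/" + file_name, path)
    return path
```

Subsequent inferences with the same model then reuse the cached graph and weights files instead of re-downloading them.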

Input and Output Types

A model’s inputs and output are defined within the manifest.


Image

An image can be encoded in PNG, JPEG, or GIF format.

An image type can have one or more of the following parameters:

  1. dimensions specifies the dimensions of the image. The image is auto-resized to the specified dimensions if no pre- or post-processing code is set.
  2. mean specifies the mean value (or vector) to be subtracted from each pixel to normalize the image. Zero is used if unspecified.
  3. color_space TODO
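The default handling implied by the `mean` parameter can be sketched in plain Python (MLModelScope's actual engine operates on tensors; the representation here is simplified for illustration):

```python
def normalize_pixels(pixels, mean=0.0):
    """Subtract the mean (scalar or per-channel vector) from each pixel.

    `pixels` is a list of (r, g, b) tuples. `mean` defaults to zero,
    matching the manifest default when the mean parameter is unspecified.
    """
    if isinstance(mean, (int, float)):
        mean = (mean, mean, mean)  # broadcast a scalar mean to all channels
    return [tuple(c - m for c, m in zip(px, mean)) for px in pixels]
```

With the default zero mean the pixels pass through unchanged, which is the behavior when a manifest omits the `mean` parameter.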

Resolving Framework

A model requires a compatible framework. The compatible framework versions are specified in the framework > version field using the semantic version format. MLModelScope generates a constraint from the semantic version and uses the highest framework version that satisfies the constraint. Although not advised, if a model has the value "latest" in the framework > version field, then the latest advertised framework version is used to perform inference on the model.
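The resolution described above can be sketched as follows. MLModelScope uses a full semantic-version library; this hand-rolled parser handles only caret constraints (like the `^0.1` in the example manifest), exact versions, and "latest", and is illustrative only.

```python
def parse(v):
    """Parse 'x.y.z' (or a shorter prefix) into a comparable tuple."""
    parts = [int(x) for x in v.split(".")]
    while len(parts) < 3:
        parts.append(0)
    return tuple(parts)

def satisfies(version, constraint):
    """Check a version against a caret, exact, or 'latest' constraint."""
    if constraint == "latest":
        return True
    if constraint.startswith("^"):
        lo = parse(constraint[1:])
        # ^x.y.z allows versions up to the next non-zero leading component,
        # e.g. ^0.1 means >=0.1.0 and <0.2.0
        if lo[0] > 0:
            hi = (lo[0] + 1, 0, 0)
        elif lo[1] > 0:
            hi = (0, lo[1] + 1, 0)
        else:
            hi = (0, 0, lo[2] + 1)
        return lo <= parse(version) < hi
    return parse(version) == parse(constraint)

def resolve(available, constraint):
    """Pick the highest available framework version satisfying the constraint."""
    matches = [v for v in available if satisfies(v, constraint)]
    if not matches:
        raise LookupError(f"cannot resolve framework constraint {constraint!r}")
    return max(matches, key=parse)
```

Given installed MXNet versions `["0.1.0", "0.1.2", "0.2.0"]`, the manifest constraint `^0.1` resolves to `0.1.2`, and an unsatisfiable constraint raises an error, mirroring the failure case described above.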