EvalDNN

A Toolbox for Evaluating Deep Neural Network Models

Overview

EvalDNN is an open-source toolbox for evaluating deep learning models, with support for multiple frameworks and metrics.

Authors: Yongqiang Tian*, Zhihua Zeng*, Ming Wen, Yepang Liu, Tzu-yang Kuo, and Shing-Chi Cheung.

*The first two authors contributed equally.

This project is mainly supported by the Microsoft Asia Cloud Research Software Fellow Award 2019.

Demo video

Supported Frameworks and Metrics

EvalDNN supports models built with the following frameworks (a sketch of loading pretrained models from these frameworks appears after the list):

  • TensorFlow
  • PyTorch
  • Keras
  • MXNet

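As a minimal illustration of what "models built with these frameworks" means in practice, pretrained ImageNet classifiers can be loaded as shown below. The ResNet-50 choices are assumptions for the sketch; these are the standard framework APIs, not EvalDNN's own interface.

```python
# Loading a pretrained ImageNet classifier in each framework (illustrative only;
# these are standard framework calls, not EvalDNN's interface).

# Keras (tf.keras in TensorFlow 1.15 exposes the same applications module)
from keras.applications import ResNet50
keras_model = ResNet50(weights="imagenet")

# PyTorch / torchvision
import torchvision
torch_model = torchvision.models.resnet50(pretrained=True)

# MXNet / GluonCV
from gluoncv import model_zoo
mxnet_model = model_zoo.get_model("resnet50_v1", pretrained=True)
```
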
EvalDNN supports the following evaluation metrics (a conceptual sketch of the first two appears after the list):

  • Top-K accuracy
  • Neuron Coverage
  • Robustness

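The first two metrics can be stated concretely: top-K accuracy counts how often the true label appears among the K highest-scoring predictions, and neuron coverage is the fraction of neurons whose activation exceeds a threshold on at least one test input. The sketch below is a conceptual illustration, not EvalDNN's API; the Keras-based probing, the threshold value, and the helper names are assumptions.

```python
# Conceptual sketch of top-k accuracy and neuron coverage (not EvalDNN's API).
# Assumes a Keras model and NumPy batches of preprocessed inputs and labels.
import numpy as np
import tensorflow as tf

def top_k_accuracy(model, x, y_true, k=5):
    """Fraction of samples whose true label is among the k highest-scoring classes."""
    scores = model.predict(x)                      # shape: (n_samples, n_classes)
    top_k = np.argsort(scores, axis=1)[:, -k:]     # indices of the k largest scores
    return np.mean([y in row for y, row in zip(y_true, top_k)])

def neuron_coverage(model, x, threshold=0.0):
    """Fraction of neurons activated above `threshold` by at least one input."""
    layers = [l for l in model.layers if len(l.weights) > 0]   # layers with neurons
    probe = tf.keras.Model(model.inputs, [l.output for l in layers])
    activated, total = 0, 0
    for out in probe.predict(x):
        out = out.reshape(out.shape[0], -1)        # flatten each layer's activations
        activated += np.sum(np.any(out > threshold, axis=0))
        total += out.shape[1]
    return activated / total

# Example usage with a pretrained ImageNet model (assumed for illustration):
# model = tf.keras.applications.VGG16(weights="imagenet")
# print(top_k_accuracy(model, x_batch, y_batch, k=5))
# print(neuron_coverage(model, x_batch, threshold=0.0))
```
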
Usage of EvalDNN

Please check the documentation in the EvalDNN GitHub repository here.

Benchmarks

Results

All benchmark results are available here.

Reproduction

Our code is available in the GitHub repository here.

The following is our experimental configuration; a quick version check appears after the list.

  • Python 3.7.5
  • TensorFlow 1.15
  • PyTorch 1.3.1
  • MXNet 1.5.1
  • GluonCV 0.5.0
  • Keras 2.3.1
  • CUDA 10.0
  • All experiments were conducted on Azure NC6 instances.
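
To confirm that a local environment matches this configuration before rerunning the experiments, the installed package versions can be checked as in the sketch below. The expected versions simply mirror the list above; the CUDA version is not checked here, and the script is an assumption for convenience rather than part of the reproduction package.

```python
# Sanity check that installed packages match the configuration listed above.
# A minimal sketch; adjust the expected versions if your setup differs.
import sys

expected = {
    "tensorflow": "1.15",
    "torch": "1.3.1",
    "mxnet": "1.5.1",
    "gluoncv": "0.5.0",
    "keras": "2.3.1",
}

print("Python", sys.version.split()[0])  # expected: 3.7.5
for name, want in expected.items():
    try:
        module = __import__(name)
        have = getattr(module, "__version__", "unknown")
        status = "OK" if have.startswith(want) else f"MISMATCH (found {have})"
    except ImportError:
        status = "NOT INSTALLED"
    print(f"{name:<12} expected {want:<8} {status}")
```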

The image IDs used for evaluating model robustness can be downloaded from here.