Sunday, January 17, 2010

Table of results for the MNIST dataset

This is a table documenting some of the best results reported by various papers on the MNIST dataset.

The results shown indicate the test error obtained by training on all 60,000 training samples and testing on the 10,000 test samples.
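
A minimal sketch of this evaluation protocol is given below. It assumes scikit-learn is available, that the OpenML mnist_784 dump keeps the standard ordering (first 60,000 rows are the official training set, last 10,000 the test set), and it uses a plain logistic-regression baseline purely for illustration; it is not one of the listed methods.

# Sketch of the MNIST protocol used by every entry below: train on the
# 60,000 training images, test on the 10,000 test images, report test error in %.
from sklearn.datasets import fetch_openml
from sklearn.linear_model import LogisticRegression

X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
X = X / 255.0                              # scale pixel values to [0, 1]
X_train, y_train = X[:60000], y[:60000]    # official training set
X_test, y_test = X[60000:], y[60000:]      # official test set

# Illustrative baseline classifier (not one of the methods in the list).
clf = LogisticRegression(max_iter=200).fit(X_train, y_train)
error = 100.0 * (clf.predict(X_test) != y_test).mean()
print(f"Test error: {error:.2f}%")
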
  1. Multi-column Deep Neural Networks for Image Classification (CVPR 2012)
    Cited 9 times. 0.23%

    Supplemental material and technical report available.
  2. Deep Big Simple Neural Nets Excel on Handwritten Digit Recognition (2010)
    Cited 1 time. 0.35%
    Additional info: 6-layer NN 784-2500-2000-1500-1000-500-10 (on GPU) [elastic distortions; see the sketch of this augmentation after the list]
  3. Efficient Learning of Sparse Representations with an Energy-Based Model (2006)
    Cited 109 times. 0.39%
    Additional info: large conv. net, unsup pretraining [elastic distortions]
  4. Stochastic Pooling for Regularization of Deep Convolutional Neural Networks (2013)
    Cited 1 time. 0.47%
    Additional info: Stochastic Pooling
  5. Best Practices for Convolutional Neural Networks Applied to Visual Document Analysis (2003)
    Cited 190 times. 0.4%
  6. What is the Best Multi-Stage Architecture for Object Recognition? (ICCV 2009)
    Cited 39 times. 0.53%
    Additional info: large conv. net, unsup pretraining [no distortions]
  7. Deformation Models for Image Recognition (PAMI 2007)
    Cited 46 times. 0.54%
    Additional info: K-NN with non-linear deformation (IDM) (Preprocessing: shiftable edges)
  8. A trainable feature extractor for handwritten digit recognition (2007)
    Cited 38 times. 0.54%
    Additional info: Trainable feature extractor + SVMs [affine distortions]
  9. Training Invariant Support Vector Machines (2002)
    Cited 281 times. 0.56%
    Additional info: Virtual SVM, deg-9 poly, 2-pixel jittered (Preprocessing: deskewing)
  10. Simple Methods for High-Performance Digit Recognition Based on Sparse Coding (TNN 2008)
    0.59%
    Additional info: unsupervised sparse features + SVM, [no distortions]
  11. Unsupervised learning of invariant feature hierarchies with applications to object recognition (CVPR 2007)
    Cited 119 times. 0.62%
    Additional info: large conv. net, unsup features [no distortions]
  12. Shape matching and object recognition using shape contexts (PAMI 2002)
    Cited 2089 times. 0.63%
    Additional info: K-NN, shape context matching (preprocessing: shape context feature extraction)
  13. Beyond Spatial Pyramids: Receptive Field Learning for Pooled Image Features (2012)
    Cited 0 times. 0.64%
  14. Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations (2009)
    0.82%
  15. Large-Margin kNN Classification using a Deep Encoder Network (2009)
    0.94%
  16. Deep Boltzmann Machines (2009)
    0.95%
  17. CS81: Learning words with Deep Belief Networks (2008)
    1.12%
  18. Convolutional Neural Networks (2003)
    1.19%
    Additional info: The ConvNN is based on the paper "Best Practices for Convolutional Neural Networks Applied to Visual Document Analysis".
  19. Reducing the dimensionality of data with neural networks (2006)
    1.2%
  20. Deep learning via semi-supervised embedding (2008)
    1.5%
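
Several of the best results above (items 2 and 3) use the elastic distortions introduced in "Best Practices for Convolutional Neural Networks Applied to Visual Document Analysis" (item 5). Below is a minimal sketch of that augmentation, assuming NumPy and SciPy; the alpha and sigma defaults are illustrative values, not the exact parameters used in the cited papers.

import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_distort(image, alpha=36.0, sigma=6.0, rng=None):
    # Elastic distortion in the spirit of Simard et al. (2003): a random
    # displacement field is smoothed with a Gaussian (sigma), scaled by alpha,
    # and used to resample the image. alpha/sigma here are illustrative defaults.
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape
    dx = gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma) * alpha
    y, x = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.array([y + dy, x + dx])
    return map_coordinates(image, coords, order=1, mode="reflect")

# Example: distort a single 28x28 digit before adding it to the training set.
# digit = X_train[0].reshape(28, 28)
# distorted = elastic_distort(digit)
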
