Sunday, February 27, 2011

Table of results for the CIFAR-10 dataset

This is a table documenting some of the best results papers have obtained on the CIFAR-10 dataset; the percentages are classification accuracy on the 10,000-image test set. A minimal sketch of the evaluation and of the translation/reflection augmentation mentioned in several entries appears after the table.
  1. Spatially-sparse convolutional neural networks (ARXIV 2014)
    Cited 12 times. 93.72%
    Additional info: DeepCNiN(5,300) with data augmentation and Network-in-Network layers
  2. Deep Residual Learning for Image Recognition (ARXIV 2015)
    Cited 1 time. 93.57%
    Additional info: ResNet 110 layers, 1.7 million parameters
    Link to slides
  3. Deeply-Supervised Nets (ARXIV 2014)
    Cited 66 times. 91.78%
    Additional info: With data augmentation. Without data augmentation, it's 90.31%
    Link to paper's project page
  4. Network In Network (ARXIV 2013)
    Cited 4 times. 91.19%
    Additional info: NIN + Dropout + Data Augmentation. Without data augmentation, it's 89.59%
    Link to source code at github
  5. Regularization of Neural Networks using DropConnect (ICML 2013)
    Cited 0 times. 90.68%
    Additional info: Voting with 12 DropConnect networks. With data augmentation.
    Link to project page (Contains source code etc)
    Link to Supplementary Material
    Link to slides
  6. Maxout networks (ICML 2013)
    Cited 90 times. 90.62%
    Additional info: Three convolutional maxout layers and a fully connected softmax layer; training data is augmented with translations and horizontal reflections.
    Link to project page (source code included)
  7. Multi-Column Deep Neural Networks for Image Classification (CVPR 2012)
    Cited 170 times. 88.79%
    Link to technical Report
    Link to Supplemental material
  8. Deep Learning using Linear Support Vector Machines (ARXIV 2013)
    Cited 2 times. 88.1%
  9. Practical Bayesian Optimization of Machine Learning Algorithms (NIPS 2012)
    Cited 121 times. 85.02%
    Additional info: With data augmentation (horizontal reflections and translations), 90.5% accuracy on the test set is achieved.
  10. Least Squares Revisited: Scalable Approaches for Multi-class Prediction (ARXIV 2013)
    Cited 0 times. 85%
  11. Stochastic Pooling for Regularization of Deep Convolutional Neural Networks (ICLR 2013)
    Cited 34 times. 84.87%
    Additional info: Stochastic-100 Pooling
    Link to paper's project page
  12. Improving neural networks by preventing co-adaptation of feature detectors (2012)
    Cited 261 times. 84.4%
  13. Understanding Deep Architectures using Recursive Convolutional Network (ARXIV 2013)
    Cited 4 times. 84%
  14. Discriminative Learning of Sum-Product Networks (NIPS 2012)
    Cited 33 times. 83.96%
  15. Beyond Spatial Pyramids: Receptive Field Learning for Pooled Image Features (2012)
    Cited 95 times. 83.11%
  16. Learning Invariant Representations with Local Transformations (2012)
    Cited 12 times. 82.2%
    Additional info: TIOMP-1/T (combined, K= 4,000)
  17. Learning Feature Representations with K-means (NNTOT 2012)
    Cited 35 times. 82%
  18. Selecting Receptive Fields in Deep Networks (NIPS 2011)
    Cited 51 times. 82%
  19. The Importance of Encoding Versus Training with Sparse Coding and Vector Quantization (ICML 2011)
    Cited 202 times. 81.5%
    Source code: Adam Coates's web page
  20. High-Performance Neural Networks for Visual Object Classification (2011)
    Cited 26 times. 80.49%
  21. Object Recognition with Hierarchical Kernel Descriptors (CVPR 2011)
    Cited 55 times. 80%
    Source code: Project web page
  22. An Analysis of Single-Layer Networks in Unsupervised Feature Learning (NIPS Workshop 2010)
    Cited 296 times. 79.6%
    Additional info: K-means (Triangle, 4000 features)
    Homepage: Link
  23. Making a Science of Model Search (2012)
    Cited 65 times. 79.1%
  24. Convolutional Deep Belief Networks on CIFAR-10 (2010)
    Cited 34 times. 78.9%
    Additional info: 2 layers
  25. Spike-and-Slab Sparse Coding for Unsupervised Feature Discovery (2012)
    Cited 11 times. 78.8%
  26. Pooling-Invariant Image Feature Learning (ARXIV 2012)
    Cited 1 time. 78.71%
    Additional info: 1600 codes, learnt using 2x PDL
  27. Semiparametric Latent Variable Models for Guided Representation (2011)
    Cited 0 times. 77.9%
  28. Learning Separable Filters (2012)
    Cited 1 time. 76%
  29. Kernel Descriptors for Visual Recognition (NIPS 2010)
    Cited 57 times. 76%
    Additional info: KDES-A
    Source code: Project web page
  30. Image Descriptor Learning Using Deep Networks (2010)
    Cited 0 times. 75.18%
  31. Improved Local Coordinate Coding using Local Tangents (ICML 2010)
    Cited 39 times. 74.5%
    Additional info: Linear SVM with improved LCC
  32. An Analysis of the Connections Between Layers of Deep Neural Networks (ARXIV 2013)
    Cited 0 times. 73.2%
    Additional info: 2 layers (K = 2, random RF)
  33. Tiled convolutional neural networks (NIPS 2010)
    Cited 33 times. 73.1%
    Additional info: Deep Tiled CNNs (s=4, with finetuning)
    Source code: Quoc V. Le's web page
  34. Semiparametric Latent Variable Models for Guided Representation (2011)
    Cited 0 times. 72.28%
    Additional info: Alpha = 0.01
  35. Modelling Pixel Means and Covariances Using Factorized Third-Order Boltzmann Machines (CVPR 2010)
    Cited 84 times. 71%
    Additional info: mcRBM-DBN (11025-8192-8192), 3 layers, PCA’d images
  36. On Autoencoders and Score Matching for Energy Based Models (ICML 2011)
    Cited 16 times. 65.5%
  37. Factored 3-Way Restricted Boltzmann Machines For Modeling Natural Images (JMLR 2010)
    Cited 50 times. 65.3%
    Additional info: 4,096 3-Way, 3 layer, ZCA’d images
  38. Fastfood - Approximating Kernel Expansions in Loglinear Time (ICML 2013)
    Cited 3 times. 63.1%
  39. Learning invariant features through local space contraction (2011)
    Cited 2 times. 52.14%
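
The sketch below is not taken from any of the papers above; it is just a rough Python/NumPy illustration of what the figures in the table mean (percentage of correctly classified images in the CIFAR-10 test batch) and of the "translations & horizontal reflections" augmentation that several entries mention. It assumes the official "CIFAR-10 python version" archive has been extracted to cifar-10-batches-py/, and predict is a hypothetical stand-in for whatever classifier a given paper trains.

import pickle
import numpy as np

def load_batch(path):
    # One CIFAR-10 batch file -> (N, 32, 32, 3) uint8 images and integer labels.
    with open(path, "rb") as f:
        batch = pickle.load(f, encoding="bytes")
    images = batch[b"data"].reshape(-1, 3, 32, 32).transpose(0, 2, 3, 1)
    labels = np.array(batch[b"labels"])
    return images, labels

def augment(image, max_shift=4):
    # Random horizontal reflection plus a small random translation (zero padding),
    # roughly the augmentation several entries above refer to.
    if np.random.rand() < 0.5:
        image = image[:, ::-1]  # flip left-right
    dy, dx = np.random.randint(-max_shift, max_shift + 1, size=2)
    shifted = np.zeros_like(image)
    shifted[max(0, dy):32 - max(0, -dy), max(0, dx):32 - max(0, -dx)] = \
        image[max(0, -dy):32 - max(0, dy), max(0, -dx):32 - max(0, dx)]
    return shifted

def test_accuracy(predict):
    # The number reported in the table: percent correct on the 10,000-image test batch.
    images, labels = load_batch("cifar-10-batches-py/test_batch")
    predictions = np.array([predict(img) for img in images])
    return 100.0 * np.mean(predictions == labels)

A paper's reported figure corresponds to the value test_accuracy would return for its trained classifier; augment is applied to training images only, never to the test batch.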

6 comments:

Jacob said...

This is really great! I've been looking for something like this, and as far as I can see it is pretty complete.

rodrigob said...

The recently published "Selecting receptive fields in deep networks" (http://www.stanford.edu/~acoates) reaches 82% on this dataset.

Hao Wooi Lim said...

@rodrigob: Thank you so much for the information. The blog is updated now :)

Unknown said...

Thanks for these results. These are really very helpful

Unknown said...

This is really great. These results are very useful

David Warde-Farley said...

I think you missed this paper at NIPS 2012: 90.5% test accuracy.