- Spatially-sparse convolutional neural networks (ARXIV 2014)

Cited 12 times. **93.72%**

Additional info: DeepCNiN(5,300), with data augmentation and Network-in-Network layers

- Deep Residual Learning for Image Recognition (ARXIV 2015)

Cited 1 time. **93.57%**

Additional info: ResNet 110 layers, 1.7 million parameters

Link to slides

- Deeply supervised nets (ARXIV 2014)

Cited 66 times. **91.78%**

Additional info: With data augmentation. Without data augmentation, it's 90.31%

Link to paper's project page

- Network In Network (ARXIV 2013)

Cited 4 times. **91.19%**

Additional info: NIN + Dropout + Data Augmentation. Without data augmentation, it's 89.59%

Link to source code at github

- Regularization of Neural Networks using DropConnect (ICML 2013)

Cited 0 times. **90.68%**

Additional info: Voting with 12 DropConnect networks. With data augmentation.

Link to project page (Contains source code etc)

Link to Supplementary Material

Link to slides

- Maxout networks (ICML 2013)

Cited 90 times. **90.62%**

Additional info: Consists of 3 convolutional maxout layers and a fully connected softmax layer; training data is augmented with translations and horizontal reflections.
Link to project page (source code included)
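
The maxout unit underlying this entry takes an elementwise maximum over several affine pieces. A minimal numpy sketch, not the paper's implementation; the function name, shapes, and sizes here are illustrative:

```python
import numpy as np

def maxout(x, W, b):
    """Maxout activation: elementwise max over k affine pieces.

    x: (d,) input vector
    W: (k, m, d) weights for k linear pieces and m output units
    b: (k, m) biases
    """
    z = np.einsum("kmd,d->km", W, x) + b  # (k, m) pre-activations
    return z.max(axis=0)                  # max over the k pieces -> (m,)

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
W = rng.standard_normal((4, 16, 8))  # k=4 pieces, 16 units, 8 inputs
b = rng.standard_normal((4, 16))
h = maxout(x, W, b)
print(h.shape)  # (16,)
```

Because the unit is a max of linear functions, it is convex in its input and learns its own piecewise-linear activation instead of using a fixed nonlinearity.
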
- Multi-Column Deep Neural Networks for Image Classification (CVPR 2012)

Cited 170 times. **88.79%**

Link to technical report

Link to Supplemental material

- Deep Learning using Linear Support Vector Machines (ARXIV 2013)

Cited 2 times. **88.1%**

- Practical Bayesian Optimization of Machine Learning Algorithms (NIPS 2012)

Cited 121 times. **85.02%**

Additional info: With data augmented with horizontal reflections and translations, 90.5% accuracy on the test set is achieved.

- Least Squares Revisited: Scalable Approaches for Multi-class Prediction (ARXIV 2013)

Cited 0 times. **85%**

- Stochastic Pooling for Regularization of Deep Convolutional Neural Networks (ICLR 2013)

Cited 34 times. **84.87%**

Additional info: Stochastic-100 Pooling

Link to paper's project page

- Improving neural networks by preventing co-adaptation of feature detectors (2012)

Cited 261 times. **84.4%**

- Understanding Deep Architectures using Recursive Convolutional Network (ARXIV 2013)

Cited 4 times. **84%**

- Discriminative Learning of Sum-Product Networks (NIPS 2012)

Cited 33 times. **83.96%**

- Beyond Spatial Pyramids: Receptive Field Learning for Pooled Image Features (2012)

Cited 95 times. **83.11%**

- Learning Invariant Representations with Local Transformations (2012)

Cited 12 times. **82.2%**

Additional info: TIOMP-1/T (combined, K = 4,000)

- Learning Feature Representations with K-means (NNTOT 2012)

Cited 35 times. **82%**

- Selecting Receptive Fields in Deep Networks (NIPS 2011)

Cited 51 times. **82%**

- The Importance of Encoding Versus Training with Sparse Coding and Vector Quantization (ICML 2011)

Cited 202 times. **81.5%**

Source code: Adam Coates's web page

- High-Performance Neural Networks for Visual Object Classification (2011)

Cited 26 times. **80.49%**

- Object Recognition with Hierarchical Kernel Descriptors (CVPR 2011)

Cited 55 times. **80%**

Source code: Project web page

- An Analysis of Single-Layer Networks in Unsupervised Feature Learning (NIPS Workshop 2010)

Cited 296 times. **79.6%**

Additional info: K-means (Triangle, 4000 features)

Homepage: Link

- Making a Science of Model Search (2012)

Cited 65 times. **79.1%**

- Convolutional Deep Belief Networks on CIFAR-10 (2010)

Cited 34 times. **78.9%**

Additional info: 2 layers

- Spike-and-Slab Sparse Coding for Unsupervised Feature Discovery (2012)

Cited 11 times. **78.8%**

- Pooling-Invariant Image Feature Learning (ARXIV 2012)

Cited 1 time. **78.71%**

Additional info: 1600 codes, learnt using 2x PDL

- Semiparametric Latent Variable Models for Guided Representation (2011)

Cited 0 times. **77.9%**

- Learning Separable Filters (2012)

Cited 1 time. **76%**

- Kernel Descriptors for Visual Recognition (NIPS 2010)

Cited 57 times. **76%**

Additional info: KDES-A

Source code: Project web page

- Image Descriptor Learning Using Deep Networks (2010)

Cited 0 times. **75.18%**

- Improved Local Coordinate Coding using Local Tangents (ICML 2010)

Cited 39 times. **74.5%**

Additional info: Linear SVM with improved LCC

- An Analysis of the Connections Between Layers of Deep Neural Networks (ARXIV 2013)

Cited 0 times. **73.2%**

Additional info: 2 layers (K = 2, random RF)

- Tiled convolutional neural networks (NIPS 2010)

Cited 33 times. **73.1%**

Additional info: Deep Tiled CNNs (s = 4, with finetuning)

Source code: Quoc V. Le's web page

- Semiparametric Latent Variable Models for Guided Representation (2011)

Cited 0 times. **72.28%**

Additional info: Alpha = 0.01

- Modelling Pixel Means and Covariances Using Factorized Third-Order Boltzmann Machines (CVPR 2010)

Cited 84 times. **71%**

Additional info: mcRBM-DBN (11025-8192-8192), 3 layers, PCA’d images

- On Autoencoders and Score Matching for Energy Based Models (ICML 2011)

Cited 16 times. **65.5%**

- Factored 3-Way Restricted Boltzmann Machines For Modeling Natural Images (JMLR 2010)

Cited 50 times. **65.3%**

Additional info: 4,096 3-Way, 3-layer, ZCA’d images

- Fastfood - Approximating Kernel Expansions in Loglinear Time (ICML 2013)

Cited 3 times. **63.1%**

- Learning invariant features through local space contraction (2011)

Cited 2 times. **52.14%**
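
Several entries above (the single-layer analysis by Coates et al. and the K-means feature-learning paper, among others) rely on K-means with a "triangle" encoding rather than hard cluster assignment. A minimal numpy sketch under the usual formulation, f_k = max(0, mean(z) - z_k) with z_k the Euclidean distance to centroid k; the function name and the small sizes here are illustrative (the papers use on the order of 4,000 centroids):

```python
import numpy as np

def triangle_encode(X, centroids):
    """Soft 'triangle' K-means encoding.

    X: (n, d) input patches; centroids: (k, d) learned K-means centers.
    Returns (n, k) features f_k = max(0, mean_j(z_j) - z_k), where
    z_k is the distance from a patch to centroid k.
    """
    # pairwise Euclidean distances, shape (n, k)
    z = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    mu = z.mean(axis=1, keepdims=True)  # per-patch mean distance
    # centroids farther than the mean distance map to zero -> sparse codes
    return np.maximum(0.0, mu - z)

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 27))   # 5 patches (e.g. 3x3x3 each)
C = rng.standard_normal((10, 27))  # 10 centroids for the sketch
F = triangle_encode(X, C)
print(F.shape)  # (5, 10)
```

The encoding is nonnegative and sparse by construction, which is part of why a plain linear classifier on top of it works well in those papers.
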

## Sunday, February 27, 2011

### Table of results for CIFAR-10 dataset

This is a table documenting some of the best results papers have obtained on the CIFAR-10 dataset.


## 6 comments:

This is really great! I've been looking for something like this, and as far as I can see it is pretty complete.

The recently published "Selecting receptive fields in deep networks" (http://www.stanford.edu/~acoates) reaches 82% on this dataset.

@rodrigob: Thank you so much for the information. The blog is updated now :)

Thanks for these results. These are really very helpful.

This is really great. These results are very useful.

I think you missed this paper at NIPS 2012: 90.5% test accuracy.
