Saturday, October 15, 2011

Why I would buy an iPhone 4S (or any Apple product)

As a gadget lover, I have been using smartphones since the days of Windows Mobile, starting with a Sony Ericsson XPERIA X1. It was certainly a very capable smartphone and largely usable. Needless to say, I was utterly blown away when I first gave Android a try. The fact that you could do away with desktop sync, and stop worrying about manually copying contacts from your old phone to your new one, was simply magnificent. At the time, I thought no other phone came close, not even the mighty iPhone.


However, one might ask: why is Android the way it is? People who follow the history of the iPhone and Android will tell you that both were introduced in the same year (2007). To be precise, the iPhone was introduced in January 2007, while Android was announced in November of the same year. If you want to know how Android looked at that time, this link sheds some light. You might say it doesn't look much like the Android we are used to nowadays, but that was how Android looked back then.


Naturally, the question of whether Android copied iOS or the reverse is debatable. The reality is that both copy from one another. One example of iOS copying Android is the notification center in iOS 5 (a favorite example of Android loyalists). However, somewhere around 2008/2009, Android made a radical change, abandoning most of its earlier UI to fully embrace the touch screen, much like iOS. By now, I think it is fair to say that Android borrows quite a lot of concepts from iOS. Without iOS, Android *might* have looked different from what it is today.


That is not to say that Android is just like iOS. In fact, Android differs from iOS in many ways. For one, Android provides more flexibility (and is thus more fragmented), while iOS is basically a walled-garden system (for better or worse). Apple did this to control the quality of the apps and to prevent vendors/telcos from fragmenting the platform by implementing their own UI spins (think HTC Sense UI, etc.). Without going into the whole Android vs iOS debate, I suspect most readers recognize their differences and have concluded that comparing them is like comparing apples and oranges (even though both are fruits and very good for you).


However, I have a confession to make. I respect people who choose Android. After all, I did it myself by going with a Samsung Galaxy S. The problem is that there is a faction of die-hard Android loyalists who choose Android because it makes them look geekier/nerdier, or because it might make them appear smarter than the rest of the iPhone crowd (which is most of the people you see on the street). While it might be true that they are somewhat smarter (or poorer, depending on how you look at it), I believe there are also people who choose Apple because they appreciate it for what it is. They may be designers themselves with good taste, or they may know how to appreciate good UI/UX, or they may simply want a really good phone. This is something Android loyalists just can't understand. They attribute it all to Steve Jobs's reality distortion field or to the fact that he is a really good salesman (which, admittedly, is true).


Personally, as a designer/coder, I have always thought I had some design sense. I can appreciate good UI/UX (user experience) when I see it, and I strive to design good UI/UX as much as I can. From my experience with both Android and iOS, iOS has a slightly better overall UX than Android, though both are better than Windows Mobile (I reserve my judgment on Windows Phone 7 until I have used it myself). As such, I can certainly envision myself using both an iPhone and an Android phone - most people carry two phones these days anyway.


In conclusion, there will always be people buying Apple products just because they want to look cool or simply want to brag (read: people on Facebook telling everyone about their cool new phone). But I have no respect for people who want an Android just to look smarter, either. In the end, the smartest people buy and use whichever phones/gadgets do what they want. There is no need to compare whose balls are bigger - they are all balls.


TL;DR: It's just a fucking phone. Get over it.

Sunday, August 14, 2011

Table of results for COIL-100 dataset

This is a table documenting some of the best results obtained by various papers on the COIL-100 dataset.

COIL-100 is a dataset containing images of 100 objects, each with 72 views. Here's a link to the official website for COIL-100.
  1. Nearest Prime Simplicial Complex for Object Recognition (2011)
    Cited 0 times. 97.19%
  2. Multiple-View Object Recognition in Smart Camera Networks (2011)
    Cited 0 times. 95%
  3. Deep Learning from Temporal Coherence in Video (2009)
    Cited 30 times. 92.5%
  4. Deep Learning of Invariant Features via Simulated Fixations in Video (2012)
    Cited 0 times. 82%
    Additional info: Also trained with van Hateren videos (unrelated to COIL-100), obtaining 87%
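
The table above reports recognition accuracies under view-based train/test splits. As a rough illustration of how such an experiment is commonly set up, here is a minimal sketch; the random features, the every-4th-view split, and the 1-NN classifier are my own stand-ins, not the method of any paper listed.

```python
import numpy as np

# Hypothetical sketch of a COIL-100-style evaluation protocol.
# COIL-100 has 100 objects with 72 views each (one image every 5
# degrees); a common protocol trains on a subset of views per object
# and tests on the held-out views. Random vectors stand in for image
# features here; real data loading/feature extraction are placeholders.
rng = np.random.default_rng(0)
n_objects, n_views, dim = 100, 72, 64

# Fake "features": each object is a tight cluster of noisy views.
centers = rng.normal(size=(n_objects, dim))
features = centers[:, None, :] + 0.1 * rng.normal(size=(n_objects, n_views, dim))
labels = np.repeat(np.arange(n_objects), n_views)

# Train on every 4th view (18 views per object), test on the rest.
train_mask = np.tile(np.arange(n_views) % 4 == 0, n_objects)
X = features.reshape(-1, dim)
X_train, y_train = X[train_mask], labels[train_mask]
X_test, y_test = X[~train_mask], labels[~train_mask]

# 1-nearest-neighbour classification via squared Euclidean distances,
# computed with the expansion |a - b|^2 = |a|^2 + |b|^2 - 2 a.b.
d2 = ((X_test ** 2).sum(1)[:, None]
      + (X_train ** 2).sum(1)[None, :]
      - 2.0 * X_test @ X_train.T)
pred = y_train[d2.argmin(axis=1)]
accuracy = (pred == y_test).mean()
print(f"1-NN accuracy on held-out views: {accuracy:.2%}")
```

On well-separated synthetic clusters like these, the nearest-neighbour baseline is essentially perfect; the papers in the table differ mainly in how they extract features before a step like this.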

Friday, July 15, 2011

The Internet is its own biggest enemy

We live in a new, unprecedented era. An era where the Internet is considered by some the most powerful weapon against an oppressive regime. As Wael Ghonim said, "If you want to free a society, just give them Internet access". If you want examples of this, you need only look at what happened in Egypt, Libya and Syria. But why, you ask? After all, the Internet is merely a communication tool, and we have had communication tools for centuries. True. However, you have to remember that communication in the olden days was slow and unreliable. Today we communicate almost at the speed of light, and while traditional media like newspapers, magazines and news programs can be censored, on the Internet the more you try to censor something, the more it can backfire, provided the situation is right. This phenomenon is otherwise known as the "Streisand effect".


Today, many scandals are exposed because of the Internet. And as companies like Sony, Apple and Microsoft know all too well, once a bad story turns up on the Internet, there is no going back. You can't take something down from the Internet: take down one web page and a thousand spring up. Once an image is tainted, it is really difficult to repair. Of course, this is mostly good for consumers.


In this day and age, it is getting harder to keep secrets, especially secrets that have a profound impact on the citizens of a country or on consumers. The rise of whistleblower sites like WikiLeaks would not have been possible without the Internet and anonymous communication tools such as Tor, which lets you surf the Internet anonymously. Tools like Tor also disable an oppressive regime's ability to censor anything on the Internet, short of shutting the Internet down completely. Even then, the U.S. is working on what they call "Internet in a box" as a way to counter that.


Hence, by now you might be tempted to think that this is a great era. A great era for freedom. A great era for societies that want to be just and fair for all. However, as great and wonderful as the Internet is, it turns out to be its own biggest enemy. Because anonymity is available on the Internet, it is filled with rumors, lies and propaganda spread by irresponsible parties. It is becoming more and more difficult to believe what you read on the Internet, and harder still to separate the truths from the half-truths. It is for this reason that many people (the older generation, perhaps) still tend to believe what they read in the newspaper or see on TV rather than, say, something forwarded by e-mail.


To give an example: if news that negatively portrays the leader of a country surfaces on the Internet, it could be rendered (mostly) useless, even if true, if the people are made to believe that it was fabricated by the opposition with ill intentions. The situation is made more futile if no one claims responsibility for spreading the news in the first place. Where there is anonymity, there is no accountability. And the truth is that there are people out there who would do all sorts of crazy/illegal things if total anonymity were guaranteed - that is, if they could not be held accountable for whatever they did. However, in this case, anonymity is the reason the news surfaced at all, because if you know something that could threaten some very powerful people, you would most likely prefer to remain anonymous, for obvious reasons.


Hence we are faced with a difficult trade-off: keep the Internet anonymous and lose accountability, or destroy anonymity and lose freedom on the Internet. It would seem that we just can't have it all.

Sunday, February 27, 2011

Table of results for CIFAR-10 dataset

This is a table documenting some of the best results obtained by various papers on the CIFAR-10 dataset.
  1. Spatially-sparse convolutional neural networks (ARXIV 2014)
    Cited 12 times. 93.72%
    Additional info: DeepCNiN(5,300) With data augmentation and Network-in-Network layers
  2. Deep Residual Learning for Image Recognition (ARXIV 2015)
    Cited 1 time. 93.57%
    Additional info: ResNet 110 layers, 1.7 million parameters
    Link to slides
  3. Deeply supervised nets (ARXIV 2014)
    Cited 66 times. 91.78%
    Additional info: With data augmentation. Without data augmentation, it's 90.31%
    Link to paper's project page
  4. Network In Network (ARXIV 2013)
    Cited 4 times. 91.19%
    Additional info: NIN + Dropout + Data Augmentation. Without data augmentation, it's 89.59%
    Link to source code at github
  5. Regularization of Neural Networks using DropConnect (ICML 2013)
    Cited 0 times. 90.68%
    Additional info: Voting with 12 DropConnect networks. With data augmentation.
    Link to project page (Contains source code etc)
    Link to Supplementary Material
    Link to slides
  6. Maxout networks (ICML 2013)
    Cited 90 times. 90.62%
    Additional info: Consists of 3 convolution maxout layers & a fully connected softmax layer, training data is augmented with translations & horizontal reflections.
    Link to project page (source code included)
  7. Multi-Column Deep Neural Networks for Image Classification (CVPR 2012)
    Cited 170 times. 88.79%
    Link to technical report
    Link to supplemental material
  8. Deep Learning using Linear Support Vector Machines (ARXIV 2013)
    Cited 2 times. 88.1%
  9. Practical Bayesian Optimization of Machine Learning Algorithms (NIPS 2012)
    Cited 121 times. 85.02%
    Additional info: With data augmented with horizontal reflections and translations, 90.5% accuracy on the test set is achieved.
  10. Least Squares Revisited: Scalable Approaches for Multi-class Prediction (ARXIV 2013)
    Cited 0 times. 85%
  11. Stochastic Pooling for Regularization of Deep Convolutional Neural Networks (ICLR 2013)
    Cited 34 times. 84.87%
    Additional info: Stochastic-100 Pooling
    Link to paper's project page
  12. Improving neural networks by preventing co-adaptation of feature detectors (2012)
    Cited 261 times. 84.4%
  13. Understanding Deep Architectures using Recursive Convolutional Network (ARXIV 2013)
    Cited 4 times. 84%
  14. Discriminative Learning of Sum-Product Networks (NIPS 2012)
    Cited 33 times. 83.96%
  15. Beyond Spatial Pyramids: Receptive Field Learning for Pooled Image Features (2012)
    Cited 95 times. 83.11%
  16. Learning Invariant Representations with Local Transformations (2012)
    Cited 12 times. 82.2%
    Additional info: TIOMP-1/T (combined, K = 4,000)
  17. Learning Feature Representations with K-means (NNTOT 2012)
    Cited 35 times. 82%
  18. Selecting Receptive Fields in Deep Networks (NIPS 2011)
    Cited 51 times. 82%
  19. The Importance of Encoding Versus Training with Sparse Coding and Vector Quantization (ICML 2011)
    Cited 202 times. 81.5%
    Source code: Adam Coates's web page
  20. High-Performance Neural Networks for Visual Object Classification (2011)
    Cited 26 times. 80.49%
  21. Object Recognition with Hierarchical Kernel Descriptors (CVPR 2011)
    Cited 55 times. 80%
    Source code: Project web page
  22. An Analysis of Single-Layer Networks in Unsupervised Feature Learning (NIPS Workshop 2010)
    Cited 296 times. 79.6%
    Additional info: K-means (Triangle, 4000 features)
    Homepage: Link
  23. Making a Science of Model Search (2012)
    Cited 65 times. 79.1%
  24. Convolutional Deep Belief Networks on CIFAR-10 (2010)
    Cited 34 times. 78.9%
    Additional info: 2 layers
  25. Spike-and-Slab Sparse Coding for Unsupervised Feature Discovery (2012)
    Cited 11 times. 78.8%
  26. Pooling-Invariant Image Feature Learning (ARXIV 2012)
    Cited 1 time. 78.71%
    Additional info: 1600 codes, learnt using 2x PDL
  27. Semiparametric Latent Variable Models for Guided Representation (2011)
    Cited 0 times. 77.9%
  28. Learning Separable Filters (2012)
    Cited 1 time. 76%
  29. Kernel Descriptors for Visual Recognition (NIPS 2010)
    Cited 57 times. 76%
    Additional info: KDES-A
    Source code: Project web page
  30. Image Descriptor Learning Using Deep Networks (2010)
    Cited 0 times. 75.18%
  31. Improved Local Coordinate Coding using Local Tangents (ICML 2010)
    Cited 39 times. 74.5%
    Additional info: Linear SVM with improved LCC
  32. An Analysis of the Connections Between Layers of Deep Neural Networks (ARXIV 2013)
    Cited 0 times. 73.2%
    Additional info: 2 layers (K = 2, random RF)
  33. Tiled convolutional neural networks (NIPS 2010)
    Cited 33 times. 73.1%
    Additional info: Deep Tiled CNNs (s=4, with finetuning)
    Source code: Quoc V. Le's web page
  34. Semiparametric Latent Variable Models for Guided Representation (2011)
    Cited 0 times. 72.28%
    Additional info: Alpha = 0.01
  35. Modelling Pixel Means and Covariances Using Factorized Third-Order Boltzmann Machines (CVPR 2010)
    Cited 84 times. 71%
    Additional info: mcRBM-DBN (11025-8192-8192), 3 layers, PCA’d images
  36. On Autoencoders and Score Matching for Energy Based Models (ICML 2011)
    Cited 16 times. 65.5%
  37. Factored 3-Way Restricted Boltzmann Machines For Modeling Natural Images (JMLR 2010)
    Cited 50 times. 65.3%
    Additional info: 4,096 3-Way, 3 layer, ZCA’d images
  38. Fastfood - Approximating Kernel Expansions in Loglinear Time (ICML 2013)
    Cited 3 times. 63.1%
  39. Learning invariant features through local space contraction (2011)
    Cited 2 times. 52.14%
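
Several entries above note that their score depends on data augmentation with translations and horizontal reflections. As a rough illustration, here is a minimal sketch of that kind of augmentation for CIFAR-10-sized (32x32x3) images; the zero-padding choice and the shift range of 4 pixels are my own assumptions, not taken from any listed paper.

```python
import numpy as np

def augment(img, max_shift=4, rng=None):
    """Randomly reflect an image horizontally and translate it.

    img: (H, W, C) array. Pixels shifted in from outside the frame are
    zero-padded here; some papers instead crop from a padded image.
    """
    rng = rng if rng is not None else np.random.default_rng()
    if rng.random() < 0.5:
        img = img[:, ::-1, :]          # horizontal reflection
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    h, w = img.shape[:2]
    out = np.zeros_like(img)
    # Copy the overlapping region after shifting by (dy, dx).
    y0_src, y1_src = max(0, -dy), h - max(0, dy)
    x0_src, x1_src = max(0, -dx), w - max(0, dx)
    y0_dst, x0_dst = max(0, dy), max(0, dx)
    out[y0_dst:y0_dst + (y1_src - y0_src),
        x0_dst:x0_dst + (x1_src - x0_src)] = img[y0_src:y1_src, x0_src:x1_src]
    return out

# Example on a stand-in image (real CIFAR-10 loading is omitted).
img = np.arange(32 * 32 * 3, dtype=np.float32).reshape(32, 32, 3)
aug = augment(img, rng=np.random.default_rng(0))
print(aug.shape)  # (32, 32, 3)
```

Applying a fresh random transform each time an image is drawn effectively multiplies the training set, which is why many of the "with data augmentation" numbers above are 1-2% higher than their unaugmented counterparts.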

Wednesday, January 05, 2011

My predictions for the year 2011

Well, everyone is posting their predictions for 2011, so for the first time, why not do one myself, just for the sake of it?


  1. 2011 will be the year certain U.S. companies (Facebook?) enter China. Or if not China, maybe some other country in Asia (Japan?). (Sorry, my vision is cloudy on this one.)
  2. 2011 will be the year Google does social.
  3. 2011 will be a bad year for many countries. Many European countries will enter (or show signs of) economic recession.
  4. 2011 will be the year of tablets. Many new models will be launched, as this is the year of Android's new tablet-ready OS, Honeycomb. Thus, Android will gain ground in tablet market share, but will not displace the iPad as the king of tablets.
  5. 2011 will be the year of the whistleblower. More new alternatives to WikiLeaks will be launched and make the news, along with new shocking leaks.
  6. 2011 will be a year of peace. There will not be war but more peace talks, probably China-US and South Korea-North Korea.
So, to sum up, keywords to watch for: China, Social, Tablet, Recession, WikiLeaks, Peace.