Just got back home from AISTATS (Artificial Intelligence and Statistics). The conference was really interesting (more so than NIPS), and it’s unfortunate that it is held only every two years. Some of the invited talks were way over my head, but I learned a lot from other people’s work and got new ideas …

Some of the coolest papers were (incomplete list and in no particular order; I need to organize my notes 🙂 But there were way more papers of interest to me than at NIPS):

**Nonlinear Dimensionality Reduction as Information Retrieval**- Jarkko Venna and Samuel Kaski

**Fast Low-Rank Semidefinite Programming for Embedding and Clustering**- Brian Kulis, Arun Surendran, and John C. Platt

**Local and global sparse Gaussian process approximations**- Edward Snelson and Zoubin Ghahramani

**A fast algorithm for learning large scale preference relations**- Vikas Raykar, Ramani Duraiswami, and Balaji Krishnapuram

**Deep Belief Networks**- Ruslan Salakhutdinov and Geoff Hinton

**Large-Margin Classification in Banach Spaces**- Ricky Der and Daniel Lee

One thing I couldn’t help but notice was how much research is now focusing on semidefinite programs (SDPs), whether for dimensionality reduction or for other purposes. Yet there are still not many efficient ways to solve SDPs. One paper presented a method based on quasi-Newton gradient descent, but it’s probably not good enough yet for large problems.
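The basic low-rank trick behind this line of work can be sketched in a few lines: instead of optimizing over a full PSD matrix X, factor X = YYᵀ with Y having only a few columns, and hand the resulting smooth problem to an off-the-shelf quasi-Newton solver. The toy max-cut instance below is my own illustration of the general idea, not the algorithm from any of the papers:

```python
import numpy as np
from scipy.optimize import minimize

# Toy max-cut SDP:  maximize 1/4 * sum_ij W_ij (1 - X_ij)
# subject to X PSD and X_ii = 1.  Instance: a 5-cycle (hypothetical example).
n = 5
W = np.zeros((n, n))
for i in range(n):
    W[i, (i + 1) % n] = W[(i + 1) % n, i] = 1.0

r = 3  # rank of the factorization X = Y @ Y.T (much smaller than n in general)

def neg_sdp_objective(y_flat):
    Y = y_flat.reshape(n, r)
    Y = Y / np.linalg.norm(Y, axis=1, keepdims=True)  # enforces X_ii = 1
    X = Y @ Y.T
    return -0.25 * np.sum(W * (1.0 - X))  # minimize the negative objective

# A few random restarts of L-BFGS (a quasi-Newton method) on the low-rank problem.
rng = np.random.default_rng(0)
sdp_value = max(
    -minimize(neg_sdp_objective, rng.standard_normal(n * r),
              method="L-BFGS-B").fun
    for _ in range(5)
)
print(sdp_value)  # SDP relaxation value for the 5-cycle (best integer cut is 4)
```

Because X = YYᵀ is PSD by construction and the row normalization handles the diagonal constraint, the semidefinite constraint never has to be enforced explicitly, which is what makes the approach cheap.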

Another interesting paper was the one on unsupervised deep belief nets, which learn a structure from the data that results in an interesting performance boost. The authors train a deep belief net (unsupervised) on the data and then train classifiers on its output; although all the results were compared only to linear techniques, they showed some impressive numbers. This reminded me of a similar idea I had a while ago that I never got to work: I tried to use label propagation methods to approximate a kernel matrix usable by SVMs and the like. It never worked, because my algorithm caused the SVMs to always overfit (despite being unsupervised; it took me a while to realize that doing something unsupervised is no guarantee that you won’t overfit your data). I’ll investigate some day what made all the difference in this case…
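A minimal sketch of that pretrain-then-classify recipe, using scikit-learn’s BernoulliRBM as a stand-in for one layer of a deep belief net (a single RBM, so much shallower than what the authors use, and the dataset and parameters here are my own choices):

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

# Scale pixel intensities to [0, 1] so they can act as Bernoulli probabilities.
digits = load_digits()
X = digits.data / 16.0
X_tr, X_te, y_tr, y_te = train_test_split(
    X, digits.target, test_size=0.3, random_state=0)

# Unsupervised feature learning (RBM) followed by a linear classifier on top.
model = Pipeline([
    ("rbm", BernoulliRBM(n_components=100, learning_rate=0.06,
                         n_iter=15, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_tr, y_tr)  # the RBM step ignores the labels entirely
acc = model.score(X_te, y_te)
print(f"RBM features + logistic regression accuracy: {acc:.3f}")
```

The RBM never sees a label, yet the linear classifier trained on its hidden activations typically beats the same classifier on raw pixels, which is the effect the paper exploits at much greater depth.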

Another interesting bit was that approximating the matrix inverse with a low-rank approximation leads to a significant loss of accuracy in Gaussian process error bars. This should be interesting for further research on speedups for these and other algorithms that require a matrix inversion (e.g. semi-supervised label propagation algorithms).
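To see the effect, here is a small numpy sketch (my own toy setup, not from the talk): the GP predictive variance k** − k*ᵀK⁻¹k* computed exactly versus with a truncated eigendecomposition of K. Dropping the small eigenvalues of K means dropping the large eigenvalues of K⁻¹, so the approximate error bars come out too wide:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(100, 1))  # training inputs
x_star = np.array([[0.0]])                 # a single test input

def rbf(A, B, ell=1.0):
    """Squared-exponential kernel."""
    sq = (A[:, None, 0] - B[None, :, 0]) ** 2
    return np.exp(-sq / (2.0 * ell ** 2))

K = rbf(X, X) + 1e-2 * np.eye(len(X))  # kernel matrix with noise on the diagonal
k_star = rbf(X, x_star)
k_ss = rbf(x_star, x_star)

# Exact GP predictive variance: k** - k*^T K^{-1} k*
var_exact = float(k_ss - k_star.T @ np.linalg.solve(K, k_star))

# Low-rank version: keep only the top-r eigenpairs of K when inverting.
r = 5
w, V = np.linalg.eigh(K)
top = np.argsort(w)[::-1][:r]
K_inv_lowrank = V[:, top] @ np.diag(1.0 / w[top]) @ V[:, top].T
var_lowrank = float(k_ss - k_star.T @ K_inv_lowrank @ k_star)

print(var_exact, var_lowrank)  # the low-rank error bar is inflated
```

The predictive mean is usually much more forgiving of this kind of truncation than the variance, which is presumably why the loss shows up specifically in the error bars.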