Deep unsupervised learning on graphic processors

Deep belief networks hold great promise for the simulation of human cognition because they show how structured and abstract representations may emerge from probabilistic unsupervised learning. These networks build a hierarchy of progressively more complex distributed representations of the sensory data by fitting a hierarchical generative model. However, learning in deep networks typically requires large datasets and can involve millions of connection weights, which makes simulations on standard computers impractically slow. Developing realistic, large-scale learning models of cognition would therefore seem to require expertise in programming parallel-computing hardware, which might explain why this promising approach is still largely confined to the machine learning community. We have shown how large-scale simulations can be easily performed on a desktop PC by exploiting the processors of low-cost graphics cards (GPUs) without any specific programming effort, thanks to the use of high-level programming routines. We also showed that even an entry-level graphics card can outperform a small high-performance computing cluster in terms of learning time, with no loss of learning quality. We therefore hope that graphics card implementations will pave the way for a widespread use of unsupervised deep learning among cognitive scientists for modeling cognition and behavior.
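
As a concrete illustration of how little GPU-specific code is needed, the sketch below shows one contrastive-divergence (CD-1) update for a binary restricted Boltzmann machine, the building block of a deep belief network, written in MATLAB. Simply wrapping the weight and data matrices with gpuArray (Parallel Computing Toolbox) moves the matrix products onto the graphics card; no CUDA programming is required. Layer sizes, the random mini-batch, and the learning rate are illustrative placeholders rather than values taken from our released code.

% One CD-1 update for a binary RBM, run on the GPU via gpuArray.
numVis = 784;    % e.g. 28x28 pixel images
numHid = 500;    % number of hidden units
batch  = 128;    % mini-batch size
lrate  = 0.1;    % learning rate

data = gpuArray(double(rand(batch, numVis) > 0.5));  % placeholder binary mini-batch
W    = gpuArray(0.01 * randn(numVis, numHid));       % visible-to-hidden weights

% Positive phase: hidden probabilities and sampled states given the data.
hidProb  = 1 ./ (1 + exp(-data * W));
hidState = double(hidProb > rand(batch, numHid, 'gpuArray'));

% Negative phase: one Gibbs step reconstructs the visible layer.
visProb  = 1 ./ (1 + exp(-hidState * W'));
hidProb2 = 1 ./ (1 + exp(-visProb * W));

% CD-1 weight update (biases omitted for brevity); all products run on the GPU.
W = W + lrate * (data' * hidProb - visProb' * hidProb2) / batch;

W = gather(W);   % copy the learned weights back to host memory

In the full implementation this update is simply looped over mini-batches and training epochs, and the trained hidden layer then serves as the visible layer of the next restricted Boltzmann machine in the stack.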

Source code

You can download the complete source code of our deep belief net implementations here (the original model can be found on Geoffrey Hinton's web page).

We also provide a series of useful Matlab routines that can be used to analyze deep belief networks, for example by plotting the shape of the receptive fields at different levels of the hierarchy or by performing simple linear read-outs to simulate explicit behavioral tasks. Two examples of deep belief networks trained on the MNIST data set can be downloaded here.
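
As a rough sketch of the kind of analysis supported by these routines (not the released code itself), the MATLAB snippet below approximates second-layer receptive fields by linearly back-projecting the second-layer weights into pixel space, a common approximation, and plots them as images. The file name dbn_mnist.mat and the variable names W1 and W2 are hypothetical placeholders.

% Hypothetical trained network: W1 (pixels x hidden1), W2 (hidden1 x hidden2).
load('dbn_mnist.mat', 'W1', 'W2');   % placeholder file and variable names

rf2 = W1 * W2;   % linear back-projection of layer-2 weights into pixel space

% Plot the first 25 second-layer receptive fields as 28x28 images (MNIST size).
figure;
for k = 1:25
    subplot(5, 5, k);
    imagesc(reshape(rf2(:, k), 28, 28)');
    axis image off;
end
colormap gray;

A simple linear read-out can be obtained in the same spirit by fitting a linear classifier (e.g. with MATLAB's mnrfit or a least-squares solution) that maps the top-level activations onto the response categories of the behavioral task.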

To try unsupervised deep learning on the prototypical cognitive modeling problem of visual numerosity perception investigated by Stoianov & Zorzi (2012), you can download the complete dataset of visual images here and follow the instructions provided inside the archive.

We provide the dataset (and the adapted source code) used in Di Bono & Zorzi (2013) to investigate word recognition with deep generative models here.

The confusion and similarity matrices produced by the unsupervised deep learning model of printed letters described in Testolin, Stoianov and Zorzi (2017) can be found here. The weights of the single-layer network trained on natural images can be found here. The full dataset of uppercase, whitened Latin letters, printed in a variety of fonts, styles and sizes (MATLAB format) is also available through the Open Science Framework.

You may also find useful a recent tutorial review on using deep neural networks to model cognition, as well as a short perspective on applying this framework to model impaired neurocognitive functions.

If you find this code useful, please cite our work as:

Testolin, A., Stoianov, I., De Filippo De Grazia, M., & Zorzi, M. (2013). Deep unsupervised learning on a desktop PC: A primer for cognitive scientists. Frontiers in Psychology, 4:251.

Zorzi, M., Testolin, A., & Stoianov, I. (2013). Modeling language and cognition with deep unsupervised learning: A tutorial overview. Frontiers in Psychology, 4:515.