Deep learning? I’ve heard of it
Deep learning is definitely a hot topic these days. It appeared suddenly and started devouring the classical image processing field. It is really good at tasks such as object recognition and face recognition. For those tasks, Deep learning performs better than classical image processing algorithms, and sometimes even better than humans.
Currently, Deep learning goes far beyond image classification. A recent work extracted the artistic style of one painting and applied it to another by using Deep learning.
What is it?
The key idea of Deep learning is to imitate the neural networks of animals. A human brain contains about eighty-six billion neurons, and each of them performs a simple operation. Each neuron has input cables (dendrites) and an output cable (an axon), which are connected to other neurons' axons and dendrites, respectively. What a neuron does is fire its axon when the activation level on its dendrites is high enough. The activation of the axon is delivered to connected dendrites through synapses.
The recent spotlight on Deep learning is less than ten years old, but its history starts in 1958. In that year, F. Rosenblatt introduced the idea of the perceptron, which computes a weighted sum of its inputs and activates its output when the sum is high enough. In 1986, the concept of multilayer perceptrons was introduced. The big step forward was that the perceptron network could be trained by back-propagating an error signal: a supervisor compares the actual output with the expected output and decreases the weights of edges that contributed to errors while increasing the weights of those that contributed to correct answers. (We are going to talk about the backpropagation details in later posts.)
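To make Rosenblatt's idea concrete, here is a minimal sketch of a single perceptron: it computes a weighted sum of its inputs and fires when the sum crosses a threshold. The weights and threshold below are illustrative values chosen so the unit behaves like a logical AND gate; they are not from any particular paper.

```python
def perceptron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs exceeds the threshold."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return 1 if weighted_sum > threshold else 0

# With weights (1, 1) and threshold 1.5, only the input (1, 1)
# pushes the weighted sum above the threshold, so the unit acts
# as an AND gate:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", perceptron((a, b), (1, 1), 1.5))
```

Training, in Rosenblatt's scheme, simply means nudging the weights up or down whenever the output is wrong; backpropagation generalizes that idea to networks with hidden layers.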
The idea of the multilayer perceptron is cool; however, the computing power available at that time was not sufficient for complicated networks. As a result, researchers focused on shallow networks with only one hidden layer or none (hidden layers are the layers between the input layer and the output layer). Exploration of deep networks mainly started in 2006, with the help of GPGPU acceleration.