Understanding Deep Learning
Deep learning is a class of algorithms that attempts to model high-level abstractions
in data by using multiple layers of a network. It is called deep learning
because more than one hidden layer is used in the network; in this sense, "deep" is a technical term.
Deep learning is often treated as a Brahmastra that can solve every problem in the world, but that is certainly not true. The Brahmastra is an amazing weapon, and deep learning is no Brahmastra, yet we still care about it.
A newbie can start with the MNIST dataset to fit a model and make predictions. In general, the goal of deep learning is to take low-level input and generate higher-level abstractions through the composition of layers. But before doing that, we need to understand the various parts of a deep learning algorithm.
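As a minimal sketch of that fit-and-predict workflow (assuming scikit-learn is installed; its bundled 8 x 8 `load_digits` set stands in here for the full 28 x 28 MNIST images, and the linear classifier is only a placeholder for a deep model):

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Load a small MNIST-like digit dataset (8x8 grayscale images, pre-flattened).
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Fit a simple classifier as a stand-in for a deep network.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)

# Predict on unseen images and measure accuracy.
accuracy = clf.score(X_test, y_test)
```

Swapping in the real MNIST images and a multi-layer model follows the same fit/predict pattern.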
The first part is the input layer, also called the visible layer. This layer contains an input
node for each of the entries in our feature vector.
For example, each image in the MNIST dataset is 28 x 28 pixels. If we use the raw pixel intensities of the images, our feature vector is of length 28 x 28 = 784, so there are 784 nodes in the input layer.
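This flattening step can be sketched with NumPy (the random image below is only a placeholder for a real MNIST digit):

```python
import numpy as np

# A placeholder 28 x 28 grayscale image with intensities in [0, 255].
image = np.random.randint(0, 256, size=(28, 28))

# Flatten the raw pixel intensities into a 1-D feature vector.
feature_vector = image.flatten()

# One input node per entry: 28 * 28 = 784.
n_input_nodes = feature_vector.shape[0]
```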
From there, the nodes connect to a series of hidden layers. In the simplest terms, each hidden layer is an unsupervised Restricted
Boltzmann Machine (RBM), where the output of each RBM in the hidden-layer sequence is used as input to the next. The final hidden layer then connects to an output layer.
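A minimal sketch of that layer-to-layer composition, using plain NumPy sigmoid layers to stand in for the forward pass through a stack of trained RBMs (the weights here are random placeholders, not learned):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Layer sizes: 784 input units -> two hidden layers -> 10 output units.
sizes = [784, 256, 64, 10]
weights = [rng.normal(0, 0.01, size=(a, b)) for a, b in zip(sizes, sizes[1:])]

# Forward pass: each layer's output becomes the next layer's input.
activation = rng.random(784)  # placeholder input feature vector
for W in weights:
    activation = sigmoid(activation @ W)

final_size = activation.shape[0]  # matches the 10-unit output layer
```

Training the stack (layer-wise pretraining of each RBM) is a separate step; this only shows how the layers compose.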
This layer contains the probabilities of each class label. For example, in the MNIST dataset we have 10 possible class labels (one for each of the digits 0-9). The output node that produces the largest probability is chosen as the overall classification.
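The probability-then-argmax step is commonly implemented with a softmax over the final layer's scores; a sketch with placeholder scores:

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(z - np.max(z))
    return e / e.sum()

# Placeholder scores from the final hidden layer, one per digit class 0-9.
scores = np.array([1.2, 0.3, -0.5, 2.1, 0.0, 0.7, -1.0, 0.4, 1.5, 0.1])

probs = softmax(scores)             # one probability per class label
prediction = int(np.argmax(probs))  # class with the largest probability
```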
This is quite introductory information about deep learning and the workings of neural networks. For the implementation of the coding part, stay tuned, or you can also reach out at: this link