Neural Networks Breakdown Conclusion
This is the final part of the neural networks breakdown series. We have covered all the main building blocks of an artificial neural network: the nodes, the layers, the activation function, the cost/loss function, forward propagation and, most importantly, backpropagation. One concept that was never called out by name is gradient descent: the adjustment of the weights and biases is done through the gradient descent method, which was what the last article was describing throughout. Together, these parts allow gradual learning to happen.
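To make the idea concrete, here is a minimal sketch of a single gradient descent update. The parameter names (`w`, `b`, `dL_dw`, `dL_db`, `lr`) are illustrative, and the gradients are assumed to have already been computed by backpropagation as described in the previous article.

```python
def gradient_descent_step(w, b, dL_dw, dL_db, lr=0.1):
    """Move each parameter a small step against its gradient.

    w, b        -- current weight and bias (illustrative scalars)
    dL_dw, dL_db -- gradients of the loss w.r.t. w and b (from backprop)
    lr          -- learning rate, the size of the step
    """
    w_new = w - lr * dL_dw
    b_new = b - lr * dL_db
    return w_new, b_new

# Example with made-up numbers: a positive gradient on w means the loss
# grows as w grows, so w is nudged down; the negative gradient on b
# nudges b up.
w, b = gradient_descent_step(w=0.5, b=0.1, dL_dw=0.2, dL_db=-0.4)
# w -> 0.5 - 0.1 * 0.2  = 0.48
# b -> 0.1 - 0.1 * -0.4 = 0.14
```

Repeating this step over many batches of data is what produces the gradual learning mentioned above.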
This series was meant to outline the basic concepts behind how neural networks work. In practice there are several variations of the neural network, each suited to different applications.
Applications range from classification problems (a network deciding which class or type a set of data points belongs to), to language translation, to computer vision, and many more. It is changes to the implementation of the network that give rise to these different variations.
Some of these variations are the standard feedforward neural network (the one explained throughout this series), recurrent neural networks (suited to time-series data) and convolutional neural networks (used for image problems). These are the most common ones found in the applications we use every day.
Knowing all this, trying to implement these concepts is the natural next step. The theory is not too difficult to follow, but the implementation certainly is more challenging.
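As a starting point for that implementation, here is a minimal sketch of forward propagation through the kind of feedforward network the series described: each layer takes a weighted sum of its inputs, adds a bias, and applies a sigmoid activation. The weights and sizes below are made-up illustrative values, not trained ones.

```python
import math

def sigmoid(x):
    """The activation function discussed in the series: squashes any
    real number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, weights, biases):
    """Propagate inputs through one layer of nodes.

    weights -- one row of weights per node in the layer
    biases  -- one bias per node
    Each node computes a weighted sum plus bias, then applies sigmoid.
    """
    outputs = []
    for w_row, b in zip(weights, biases):
        z = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(sigmoid(z))
    return outputs

# A tiny 2-input -> 2-hidden -> 1-output pass with arbitrary weights.
hidden = forward([0.5, -1.0],
                 weights=[[0.1, 0.4], [-0.3, 0.2]],
                 biases=[0.0, 0.1])
output = forward(hidden,
                 weights=[[0.7, -0.5]],
                 biases=[0.2])
```

Training this network would mean wrapping the forward pass with a loss function, backpropagation, and the gradient descent step covered earlier, which is exactly the exercise suggested above.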
- Shubham Anuraj, 00:59, 23 February, 2019