Convolutional Neural Network
Document Type: Coursework
Subject Area: Technology
The hidden part may consist of one or several layers, as illustrated by a regular 3-layer neural network (Yann, 2016). Neural networks help establish complex relations that a programmer could not easily encode by hand; the network identifies such relations and continually learns from previous experience. Neural networks have been around since the 1940s, but they gained popularity only recently with the emergence of backpropagation, which allows a network to adjust its hidden neuron layers whenever the outcome does not correspond to the desired output (Dreyfus, 1990). CNNs find common application in image processing and other cognitive tasks such as natural language learning. The CNN represents a complex example of deep learning, comparable to information processing by the human brain (Krizhevsky, Sutskever & Geoffrey, 2012).
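As a rough sketch of the 3-layer network mentioned above (the layer sizes and the ReLU non-linearity here are illustrative assumptions, not taken from the text), a fully connected network is just a chain of matrix multiplications and activation functions:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)

# Arbitrary layer sizes for illustration: 4 inputs -> 5 hidden units -> 3 outputs.
W1, b1 = rng.normal(size=(4, 5)), np.zeros(5)
W2, b2 = rng.normal(size=(5, 3)), np.zeros(3)

def forward(x):
    h = relu(x @ W1 + b1)   # hidden layer activations
    return h @ W2 + b2      # output layer

x = rng.normal(size=4)
y = forward(x)
print(y.shape)  # (3,)
```

Backpropagation, mentioned above, is what would adjust W1, b1, W2 and b2 whenever y differs from the desired output.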
A convolutional network is made up of one or several convolutional layers followed by one or several fully connected layers. The CNN's makeup is such that it easily takes advantage of two-dimensional (2D) input structures such as images. Convolutional neural networks came about as a result of research on the mammalian visual cortex that showed how mammals perceive objects. Each layer transforms the activations it receives into input for the next layer using a differentiable function; the layers are formed from an interconnection of nodes, each containing an activation function (Artificial Neural Networks, 2018). Convolutional networks are built from three main layer types: the convolutional layer, the pooling layer and the fully connected layer. The input is a 30×30×3 volume that holds the raw pixel values: height, width and the RGB channels.
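The 30×30×3 input volume described above can be pictured as a three-dimensional array (the dtype here is an illustrative choice):

```python
import numpy as np

# A 30x30 RGB image: height x width x colour channels.
image = np.zeros((30, 30, 3), dtype=np.float32)

print(image.shape)  # (30, 30, 3)
print(image.size)   # 2700 raw pixel values in total
```

Each convolutional, pooling or fully connected layer then transforms one such volume of activations into the next.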
The convolutional layer: the image, viewed as a matrix, is presented to the network through the inputs. The software selects a starting point, for example the top-left corner, and selects a smaller portion of the image for the filter, also known as the base neuron or kernel, to act on. The filter then performs the convolution as it moves along the image, producing a result at each stage, say every movement along the length being a stage (Schmidhuber, 2015). The layer computes the output of every neuron attached to a local region of the input. It merges the weights and the other characteristics defining a class, and makes a decision based on the linear separability of the samples offered; linear separability is easily illustrated graphically (Zhang et al.; Ciresan et al.).
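The sliding-filter computation described above can be sketched as a plain valid cross-correlation (a stride of one and no padding are assumptions made for this sketch; the image and kernel values are arbitrary):

```python
import numpy as np

def conv2d(image, kernel):
    """Slide `kernel` over `image` (stride 1, no padding),
    summing the elementwise products at each position."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16.0).reshape(4, 4)          # toy 4x4 single-channel image
kernel = np.array([[1.0, -1.0],
                   [1.0, -1.0]])               # crude vertical-edge filter
feature_map = conv2d(image, kernel)
print(feature_map.shape)  # (3, 3): one result per filter position
```

Passing `feature_map` through an activation function, for example `np.maximum(0.0, feature_map)` for ReLU, gives the layer's output activations.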
Activation functions simply define the output of a particular node whenever that node is presented with a particular set of inputs. In artificial networks, when a node receives an input, it sums it up with the biases set and then decides whether to set its status as activated or to remain not activated (Ciresan et al.).

References

Avinash, S. V. The Theory of Everything. Retrieved 2018-02-20.
Ciresan, D., Masci, J., & Schmidhuber, J. Multi-column deep neural network for traffic sign classification. Neural Networks, Selected Papers from IJCNN 2011.
Dreyfus (1990). Artificial neural networks, back propagation, and the Kelley-Bryson gradient procedure. Journal of Guidance, Control, and Dynamics. Bibcode:1990JGCD. doi:10.
Lawrence, S., Giles, C. L., Ah Chung, T., & Back. pp. 958. ieeecomputersociety.org/10. ICDAR.
Schmidhuber, J. (2015).
Anguelov, D., & Rabinovich (2015). Going deeper with convolutions.
Yann, L. Slides on Deep Learning Online.
Zhang, W.