Artificial Intelligence: What are Neural Networks?

Nivan Gujral · Published in Level Up Coding · Feb 22, 2020 · 6 min read


Have you ever used a computer, smartphone, or any other digital device? Then you have most likely experienced Artificial Intelligence. Artificial Intelligence (AI) is the field in which computers demonstrate behaviors associated with human intelligence. AI is a vast field that impacts everything from agriculture to healthcare. At the heart of AI are Neural Networks. They can find patterns in data that would be hard for a human to spot, and they set the basis for other models to branch off and be used for even more specific analysis.

Parts of a Neural Network

A Neural Network has three parts: the inputs, the hidden layers, and the outputs. In the diagram, the green circles represent the inputs, the blue circles represent the hidden layers, and the orange circles represent the outputs. The inputs are the values given to the Neural Network so it can complete a function. The hidden layers carry out those functions using the values supplied by the inputs, and a Neural Network can have multiple hidden layers depending on the objective it is built for. The output is the end result after all of the functions have taken place.
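To make the three parts concrete, here is a minimal sketch of one forward pass. The layer sizes, input values, and random weights are made-up examples, not from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

inputs = np.array([0.5, -0.2, 0.1])       # 3 input values
w_hidden = rng.normal(size=(3, 4))        # synapses: inputs -> 4 hidden neurons
w_output = rng.normal(size=(4, 2))        # synapses: hidden -> 2 outputs

# Each layer multiplies values by weights and sums them;
# the hidden layer here uses the Rectifier activation (covered below).
hidden = np.maximum(0, inputs @ w_hidden)
output = hidden @ w_output

print(output.shape)  # (2,) -- one value per output neuron
```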

The lines that connect the inputs to the hidden layers are called synapses. Each synapse is assigned a weight that decides how important its signal is. Imagine that the input and the hidden layer are train stations, the synapses are train tracks, and the signals are trains. A train travels along the tracks to get from one station to another. When many trains run on the same track, a backup occurs. To keep the trains flowing smoothly, each train is assigned a weight so that the important trains go through first, followed by the less important ones.

Activation Functions

Each circle in the hidden layer represents a single neuron. Inside the neuron, every incoming value is multiplied by the weight of its synapse, and all of these weighted values are added together. After the sum is computed, an activation function is applied.
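The weighted sum inside one neuron can be written in a couple of lines. The values and weights below are made-up examples:

```python
# One neuron: multiply each incoming value by its synapse weight,
# then add the results together.
values = [0.5, -0.2, 0.1]
weights = [0.4, 0.7, -0.3]

weighted_sum = sum(v * w for v, w in zip(values, weights))
print(weighted_sum)  # 0.5*0.4 + (-0.2)*0.7 + 0.1*(-0.3) ≈ 0.03
```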

One type of activation function is the Threshold Function. The X-axis shows the sum of the weighted values, and the Y-axis shows values from 0 to 1. The function outputs 0 if the sum of the weighted values is less than 0, and outputs 1 if the sum is greater than or equal to 0. This function is most commonly used for yes-or-no situations.

Another type of activation function is the Sigmoid Function. As with the Threshold Function, the X-axis shows the sum of the weighted values and the Y-axis shows values from 0 to 1. Unlike the Threshold Function, which jumps from one value to the other right away, this function has a gradual progression. As the sum becomes very negative, the output approaches 0; as the sum becomes very positive, the output approaches 1. This function is most commonly used to predict probabilities.

The third type of activation function is the Rectifier Function. As before, the X-axis shows the sum of the weighted values, but the Y-axis has no upper limit and keeps going up. The function outputs 0 for any negative sum and then increases linearly as the sum grows. It is one of the most popular activation functions used in Neural Networks.

The last type of activation function is the Hyperbolic Tangent Function. As with the previous functions, the X-axis shows the sum of the weighted values, but here the Y-axis ranges from -1 to 1. This function has a gradual progression just like the Sigmoid Function, but it also goes into the negatives.
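The four activation functions above can be written out directly. These are the standard mathematical definitions, not taken from any particular library:

```python
import math

def threshold(x):
    # 0 below zero, 1 at or above zero
    return 1 if x >= 0 else 0

def sigmoid(x):
    # smooth curve from 0 toward 1
    return 1 / (1 + math.exp(-x))

def rectifier(x):
    # 0 for negative sums, then grows linearly (also known as ReLU)
    return max(0, x)

def hyperbolic_tangent(x):
    # smooth curve from -1 toward 1
    return math.tanh(x)

for f in (threshold, sigmoid, rectifier, hyperbolic_tangent):
    print(f.__name__, f(-2), f(0), f(2))
```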

How Do You Train a Neural Network?

To train a Neural Network, two values are compared: the output value and the actual value. The output value is the value that the Neural Network produces from the input values. The actual value is the value that should have come out of the Neural Network. After the inputs pass through the network and produce the output value, the network computes the cost function.

The cost function is one half of the squared difference between the output value and the actual value. It tells us how large an error the Neural Network made. The network needs to lower the value of the cost function, because a lower cost means a more accurate network. The cost is fed back into the network and the weights are updated. This process repeats until the cost function is as close to 0 as possible.
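The cost described above is a one-liner. The output and actual values below are made-up examples:

```python
def cost(output_value, actual_value):
    # half the squared difference between the network's output
    # and the value that should have come out
    return 0.5 * (output_value - actual_value) ** 2

print(cost(0.8, 1.0))  # 0.5 * (0.8 - 1.0)**2 ≈ 0.02
```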

Gradient Descent

Gradient Descent is a way to lower the cost function and find the correct weight. Imagine a ball sitting at the position of the current weight. The process starts at one random weight and finds the slope of the cost at that point. If the slope is negative, the graph is going downhill, so the ball rolls down. Wherever the ball stops, the slope is found again, and the process repeats until the ball reaches the bottom of the valley. This is, in essence, how Gradient Descent finds a weight that works.
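The ball-rolling loop can be sketched for a single weight. The training example, starting weight, and learning rate below are made-up assumptions; the network here is just `output = w * x`:

```python
x, y = 2.0, 4.0        # one training example: input and actual value
w = 0.0                # starting weight (the ball's starting position)
learning_rate = 0.1    # how far the ball rolls on each step

for _ in range(100):
    output = w * x                 # the network's output value
    gradient = (output - y) * x    # slope of the cost 0.5*(output - y)**2 at w
    w -= learning_rate * gradient  # roll the ball downhill

print(round(w, 4))  # converges toward 2.0, where the cost is 0
```

A negative slope pushes the weight up and a positive slope pushes it down, so the weight settles at the bottom of the cost curve.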

Neural Networks: The Foundation of AI

Neural Networks take in inputs and process them through an activation function chosen for the task at hand. To train, a Neural Network tries to drive the cost function down toward 0 with methods like Gradient Descent. Neural Networks help complete many tasks that involve finding correlations in data. They are also the building blocks from which other, more specific models are built, such as RNNs and CNNs. RNNs are used to find correlations in data over time, while CNNs are used to find correlations in images. Hence, Neural Networks form a key foundation of Artificial Intelligence.

Send me an email at nivangujral@gmail.com if you would like to further discuss this article or just talk.
