ReLU Activation Function

Kajal Pawar


ReLU stands for Rectified Linear Unit. The ReLU activation function is one of the most widely used activation functions in deep learning models, and it appears in almost all convolutional neural networks.
[Figure: graph of the ReLU function]
The ReLU function returns the maximum of zero and its input.
The equation of the ReLU function is given by:
f(z) = max(0, z)
The ReLU function is not differentiable at zero, so it is not differentiable over its whole domain, but we can take a sub-gradient there, as shown in the figure below. Although ReLU is simple, it has been an important advance for deep learning research in recent years.
[Figure: ReLU (Rectified Linear Unit) function]
The ReLU (Rectified Linear Unit) function is an activation function that is currently more popular than the sigmoid and tanh functions.
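To see how the three activations compare, they can be evaluated side by side. This is a minimal sketch using only the standard library; `relu` and `sigmoid` are helper names defined here, not part of any framework:

```python
import math

def relu(z):
    # zero for negative inputs, identity for positive inputs
    return max(0.0, z)

def sigmoid(z):
    # squashes any input into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

# tanh squashes into (-1, 1); math.tanh is built in
for z in [-2.0, 0.0, 2.0]:
    print(z, relu(z), round(sigmoid(z), 4), round(math.tanh(z), 4))
```

Note that for large positive or negative inputs sigmoid and tanh flatten out (their gradients vanish), while ReLU keeps a constant slope of 1 on the positive side.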

How to write a ReLU function and its derivative in Python?

So, writing a ReLU function and its derivative is quite easy. We simply have to define a function for each formula. It is implemented as shown below:
ReLU function
def relu_function(z):
    return max(0, z)
ReLU function derivative
def relu_prime_function(z):
    return 1 if z > 0 else 0
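Given the two definitions above, a quick sanity check over a few sample inputs (using the same function names as written):

```python
def relu_function(z):
    return max(0, z)

def relu_prime_function(z):
    return 1 if z > 0 else 0

# the function passes positives through and clamps negatives to zero
print([relu_function(z) for z in [-3, -1, 0, 2, 5]])        # [0, 0, 0, 2, 5]
# the derivative is 1 for positive inputs and 0 otherwise
print([relu_prime_function(z) for z in [-3, -1, 0, 2, 5]])  # [0, 0, 0, 1, 1]
```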

Advantages of the ReLU function

  • When the input is positive, there is no gradient saturation problem.
  • The calculation is very fast. The ReLU function involves only a simple comparison, so both the forward and backward passes are much faster than with tanh and sigmoid, which require computing exponentials and are therefore slower.

Disadvantages of the ReLU function

  • When the input is negative, ReLU outputs zero, so a neuron that only ever receives negative inputs stops learning. This is known as the Dead Neurons problem. During forward propagation this is not an issue in itself: some regions of the network are active while others stay inactive. But during backpropagation, if the input is negative the gradient is completely zero, which is the same saturation problem seen with the sigmoid and tanh functions.
  • The output of the ReLU function is always zero or positive, which means ReLU activations are not zero-centered.
  • The ReLU function is typically used only within the hidden layers of a neural network model.
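The dead-neuron issue in the first point can be seen numerically: once a neuron's pre-activation is negative for every input it sees, the gradient flowing back through it is always zero, so its weights never update. This is a minimal sketch with hypothetical weight and input values:

```python
def relu_prime(z):
    # derivative of ReLU: 1 for positive inputs, 0 otherwise
    return 1.0 if z > 0 else 0.0

# hypothetical neuron whose weighted sum is negative for every input
weights, bias = [-0.5, -0.8], -1.0
inputs_batch = [[1.0, 2.0], [0.5, 1.5], [2.0, 0.1]]

for x in inputs_batch:
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    # any gradient w.r.t. the weights is scaled by relu_prime(z), which is 0 here,
    # so this neuron receives no update: it is "dead"
    print(z, relu_prime(z))
```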
To overcome the Dead Neurons problem of the ReLU function, a modification called Leaky ReLU was introduced. It adds a small slope for negative inputs to keep gradient updates alive and so avoids dead neurons.
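Leaky ReLU can be sketched in the same style as the functions above; the slope of 0.01 used here is a common default, not a fixed requirement:

```python
def leaky_relu(z, alpha=0.01):
    # pass positives through; scale negatives by a small slope instead of zeroing them
    return z if z > 0 else alpha * z

print(leaky_relu(5.0))   # 5.0
print(leaky_relu(-5.0))  # -0.05
```

Because the negative side now has slope `alpha` instead of zero, the gradient is never exactly zero and a neuron can recover even after receiving negative inputs.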
Another variant, built on ideas from both ReLU and Leaky ReLU, is the Maxout function, which we will discuss in detail in other articles.

A simple implementation of the ReLU activation function in Python

# importing libraries
from matplotlib import pyplot
# create rectified linear function
def rectified(x):
    return max(0.0, x)
# define a series of inputs
series_in = [x for x in range(-10, 11)]
# calculate outputs for our inputs
series_out = [rectified(x) for x in series_in]
# line plot of raw inputs to rectified outputs
pyplot.plot(series_in, series_out)
pyplot.show()
Output: The plot of the ReLU activation function is shown below.
[Figure: ReLU activation function plot]
I hope you enjoyed reading this article and came away with a clear picture of the ReLU activation function.
For more such blogs/courses on data science, machine learning, artificial intelligence and emerging new technologies do visit us at InsideAIML.
Thanks for reading…
Happy Learning…
