The Activation Function Must Be Continuous and Differentiable

Neural networks have an architecture similar to that of the human brain, built from neurons. The products of the inputs (X1, X2) and weights (W1, W2) are summed with a bias (b) and finally passed through an activation function (f) to give the output (y).
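As a rough sketch of this computation (the input, weight, and bias values below are made up for illustration, and NumPy is assumed):

import numpy as np

def neuron(x, w, b, f):
    # Weighted sum of the inputs plus the bias, passed through the activation f
    return f(np.dot(w, x) + b)

# Hypothetical example values
x = np.array([0.5, -1.2])       # inputs X1, X2
w = np.array([0.8, 0.3])        # weights W1, W2
b = 0.1                         # bias
y = neuron(x, w, b, np.tanh)    # np.tanh stands in for the activation f
print(y)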

The activation function is one of the most important factors in a neural network: it decides whether a neuron will be activated and its output passed on to the next layer. In other words, it determines whether the neuron's input is relevant to the network's prediction. For this reason, it is also referred to as a threshold or transformation for the neurons, and it helps the network converge.

Activation functions help normalise the output to a range such as 0 to 1 or -1 to 1. Because they are differentiable, they make backpropagation possible: the gradient of the loss is propagated back through them, and gradient descent uses those gradients to drive the loss towards a local minimum.

In this article, I'll discuss the various types of activation functions present in a neural network.

Linear

Linear is the most basic activation function: the output is simply proportional to the input. The equation is y = az, the equation of a straight line, and it gives a range of activations from -inf to +inf. This type of function is best suited to simple regression problems, such as housing price prediction.
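A minimal NumPy sketch of the linear activation and its constant derivative (the slope a = 1 is an arbitrary default):

import numpy as np

def linear(z, a=1.0):
    # Output is proportional to the input
    return a * z

def linear_derivative(z, a=1.0):
    # Constant slope a, independent of the input z
    return np.full_like(z, a, dtype=float)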

Demerits – The derivative of the linear function is the constant a, so it carries no information about the input. It is therefore not an ideal choice, as it does not help backpropagation adjust the weights to reduce the loss, and stacking linear layers collapses into a single linear transformation.

ReLU

Rectified Linear Unit is the most used activation function in the hidden layers of a deep learning model. The formula is simple: if the input is positive, that value is returned; otherwise 0. The derivative is also simple: 1 for positive values and 0 otherwise (since the function is a constant 0 there). This helps mitigate the vanishing gradient problem. The range is 0 to infinity.
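A minimal NumPy sketch of ReLU and its derivative as described above:

import numpy as np

def relu(z):
    # Pass positive values through, clamp everything else to 0
    return np.maximum(0.0, z)

def relu_derivative(z):
    # 1 for positive inputs, 0 otherwise
    return (np.asarray(z) > 0).astype(float)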

Demerits – The dying ReLU problem (dead activation) occurs when the derivative is 0 and the weights are never updated. It is generally used only in hidden layers, not elsewhere.

ELU

Exponential Linear Unit overcomes the dying ReLU problem. It is quite similar to ReLU except for negative values: the function returns the input unchanged if it is positive and alpha*(exp(x) – 1) otherwise, where alpha is a positive constant. The derivative is 1 for positive values and alpha*exp(x) for negative values. The range is -alpha to infinity, and because it can output small negative values the mean activation is pushed closer to zero.
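A minimal NumPy sketch following the ELU formula above (alpha = 1.0 is a common but arbitrary default):

import numpy as np

def elu(z, alpha=1.0):
    # Identity for positive inputs, alpha * (exp(z) - 1) for negative inputs
    return np.where(z > 0, z, alpha * (np.exp(z) - 1.0))

def elu_derivative(z, alpha=1.0):
    # 1 for positive inputs, alpha * exp(z) otherwise
    return np.where(z > 0, 1.0, alpha * np.exp(z))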

Demerits – ELU saturates only slowly for negative inputs and is unbounded for positive inputs, so activations can still blow up. It is more computationally expensive than ReLU because of the exponential.

LeakyReLU

LeakyReLU is a slight variation of ReLU. For positive values it is the same as ReLU, returning the input unchanged; for negative values the input is multiplied by a small constant, 0.01. This is done to solve the dying ReLU problem. The derivative is 1 for positive values and 0.01 otherwise.
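A minimal NumPy sketch of LeakyReLU with the 0.01 slope mentioned above:

import numpy as np

def leaky_relu(z, slope=0.01):
    # Identity for positive inputs, a small constant slope for negative inputs
    return np.where(z > 0, z, slope * z)

def leaky_relu_derivative(z, slope=0.01):
    # 1 for positive inputs, the small slope otherwise
    return np.where(z > 0, 1.0, slope)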

Demerit – Because it is almost linear, it may not perform well on complex problems such as classification.

PReLU

Parameterized Rectified Linear Unit is again a variation of ReLU and LeakyReLU, with negative values computed as alpha*input. Unlike Leaky ReLU, where alpha is fixed at 0.01, in PReLU alpha is learnt through backpropagation, so the network finds the value that gives the best learning curve.
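A minimal NumPy sketch of PReLU; in a real framework alpha would be a trainable parameter updated by the optimiser, and the helper below only illustrates where its gradient comes from:

import numpy as np

def prelu(z, alpha):
    # Same shape as LeakyReLU, but alpha is a learnable parameter
    return np.where(z > 0, z, alpha * z)

def prelu_alpha_gradient(z, upstream_grad):
    # Gradient of the loss with respect to alpha:
    # only negative inputs contribute (d/d_alpha of alpha * z is z there)
    return np.sum(upstream_grad * np.where(z > 0, 0.0, z))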

Demerits – Like LeakyReLU it is close to linear, so it is not appropriate for all kinds of problems.

Sigmoid

Sigmoid is a non-linear activation function, also known as the logistic function: y = 1 / (1 + exp(-x)). It is continuous and monotonic, and the output is normalised to the range 0 to 1. It is differentiable and gives a smooth gradient curve. Sigmoid is mostly used in the output layer for binary classification.
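A minimal NumPy sketch of the sigmoid and its derivative:

import numpy as np

def sigmoid(z):
    # Logistic function: squashes any real input into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_derivative(z):
    s = sigmoid(z)
    return s * (1.0 - s)   # smooth gradient, at most 0.25 (at z = 0)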

Demerits – It suffers from the vanishing gradient problem and is not zero-centred, which makes optimisation harder and often slows learning.

Tanh

The hyperbolic tangent activation function ranges from -1 to 1, and its derivative lies between 0 and 1. It is zero-centred and generally performs better than sigmoid. It is used in the hidden layers of binary classification networks, among others.
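A minimal NumPy sketch of tanh and its derivative:

import numpy as np

def tanh(z):
    return np.tanh(z)

def tanh_derivative(z):
    # 1 - tanh(z)^2, which lies between 0 and 1
    return 1.0 - np.tanh(z) ** 2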

Demerits – Vanishing gradient problem

Softmax

The softmax activation function returns the probabilities of the classes as output; these probabilities are used to pick the target class, and the final prediction is the class with the highest probability. The probabilities always sum to 1. Softmax is mostly used in classification problems, preferably multiclass classification, in the output layer.
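A minimal NumPy sketch of softmax, with the usual max-subtraction trick for numerical stability (the class scores below are made up):

import numpy as np

def softmax(z):
    shifted = z - np.max(z)          # subtract the max for numerical stability
    exp_z = np.exp(shifted)
    return exp_z / np.sum(exp_z)

logits = np.array([2.0, 1.0, 0.1])   # hypothetical class scores
probs = softmax(logits)
print(probs, probs.sum())            # the probabilities sum to 1
print(np.argmax(probs))              # predicted class = highest probability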

Demerits – Softmax is only suitable for the output layer, and it becomes expensive when the number of classes is very large.

Swish

Swish is a ReLU-like function. It is self-gated, since it requires only the input and no other parameter: y = x * sigmoid(x). It allows small negative outputs, which helps with the dead activation problem, and its smoothness helps generalisation and optimisation.
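A minimal NumPy sketch of swish and its derivative, following y = x * sigmoid(x):

import numpy as np

def swish(z):
    # x * sigmoid(x), with the sigmoid written out directly
    return z / (1.0 + np.exp(-z))

def swish_derivative(z):
    s = 1.0 / (1.0 + np.exp(-z))      # sigmoid(z)
    return z * s + s * (1.0 - z * s)  # swish(z) + sigmoid(z) * (1 - swish(z))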

Demerits – It is computationally more expensive than ReLU, and its benefits are mainly reported for very deep networks (roughly 40 or more layers).

Softplus

ReLU is not differentiable at 0, a point at which most piecewise activation functions run into trouble. The softplus activation function overcomes this: y = ln(1 + exp(x)). It is similar to ReLU but smoother in nature, and it ranges from 0 to infinity.
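A minimal NumPy sketch of softplus and its derivative (np.logaddexp is used here only as a numerically stable way to compute ln(1 + exp(x))):

import numpy as np

def softplus(z):
    # ln(1 + exp(z)), computed stably as logaddexp(0, z)
    return np.logaddexp(0.0, z)

def softplus_derivative(z):
    # The derivative of softplus is the sigmoid, defined everywhere
    return 1.0 / (1.0 + np.exp(-z))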

Demerits – Due to its smooth and unbounded nature, softplus can blow up the activations to a much greater extent.

Source: https://analyticsindiamag.com/activation-functions-in-neural-network/