The correct answer is: A. 1, 2, 3, 4, 5
Gradient descent is an iterative optimization algorithm for finding a (local) minimum of a differentiable function. In machine learning it is used to train neural networks by minimizing a loss (error) function. The steps involved in gradient descent are as follows:
- Initialize the weights and biases of the network randomly.
- Pass an input through the network (a forward pass) and read the values from the output layer.
- Calculate the error between the actual value and the predicted value.
- For each neuron that contributes to the error, adjust its weights and bias in the direction that reduces the error (i.e., along the negative gradient).
- Repeat steps 2-4 until the error is minimized.
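Before applying these steps to a network, the core idea can be seen on a single function. The sketch below is illustrative (the function, learning rate, and names are my own choices, not from the original): it minimizes f(x) = (x - 3)^2, whose minimum is at x = 3, by repeatedly stepping against the gradient.

```python
# Minimal sketch of gradient descent on f(x) = (x - 3)^2.
# The gradient is f'(x) = 2(x - 3); the minimum is at x = 3.
def gradient_descent(grad, x0, lr=0.1, steps=100):
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)  # move a small step against the gradient
    return x

x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

With a learning rate of 0.1, each step shrinks the distance to the minimum by a constant factor, so `x_min` converges to 3. Training a network works the same way, except the "x" is the full set of weights and biases.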
The following is a brief explanation of each step:
- Initialize the weights and biases randomly. This ensures the network starts from an arbitrary point in the search space and breaks the symmetry between neurons.
- Pass an input through the network and read the output layer. This forward pass produces the predictions needed to compute the error.
- Calculate the error between the predicted value and the actual value. The size of the error determines how much the parameters need to change.
- Adjust the weights and biases of each neuron in proportion to its contribution to the error. Each parameter moves a small step along the negative gradient of the error.
- Repeat steps 2-4 until the error is minimized, i.e., until the network converges to a (local) minimum of the error.