The network that involves backward links from output to the input and hidden layers is called _________ A. Self organizing maps B. Perceptrons C. Recurrent neural network D. Multi layered perceptron

[amp_mcq option1="Self organizing maps" option2="Perceptrons" option3="Recurrent neural network" option4="Multi layered perceptron" correct="option3"]

The correct answer is C. Recurrent neural network.

A recurrent neural network (RNN) is a type of artificial neural network that can process sequential data. RNNs are often used for tasks such as natural language processing and speech recognition.

In an RNN, outputs are fed back as inputs to earlier layers: the hidden state computed at one time step is passed back in at the next time step, alongside the new input. These backward links give the network a form of memory, allowing it to learn dependencies that span many steps of a sequence.
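The feedback loop can be sketched in a few lines. This is a minimal illustration with hypothetical weights (`w_x`, `w_h`, `b` are made-up values, not from any trained model); the key point is that the hidden state `h` from one step is an input to the next step.

```python
import math

# One recurrent step for a tiny 2-unit network:
# h_t = tanh(w_x * x_t + w_h * h_{t-1} + b), computed per unit.
# The backward link is the reuse of h (last step's output) as an input.
def rnn_step(x, h, w_x, w_h, b):
    return [math.tanh(w_x[i] * x + w_h[i] * h[i] + b[i])
            for i in range(len(h))]

h = [0.0, 0.0]                      # initial hidden state
for x in [1.0, 0.5, -1.0]:          # a short input sequence
    h = rnn_step(x, h, w_x=[0.5, -0.3], w_h=[0.8, 0.8], b=[0.0, 0.1])
```

After the loop, `h` summarizes the whole sequence seen so far, which is what lets RNNs handle variable-length sequential data.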

Self-organizing maps (SOMs) are a type of artificial neural network trained without supervision. They map high-dimensional data onto a low-dimensional grid, and are often used for data visualization, clustering, and pattern discovery.
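A single SOM training step can be sketched as follows. This is a simplified illustration with a hypothetical one-dimensional map of scalar units (real SOMs use weight vectors and a decaying neighborhood); it shows the core idea of finding the best-matching unit and pulling it, and its neighbors, toward the input.

```python
# One SOM update step on a tiny 1-D map of scalar units.
def som_step(units, x, lr=0.5, radius=1):
    # Best-matching unit (BMU): the unit closest to the input x.
    bmu = min(range(len(units)), key=lambda i: abs(units[i] - x))
    # Move the BMU and its neighbors within `radius` toward x.
    for i in range(len(units)):
        if abs(i - bmu) <= radius:
            units[i] += lr * (x - units[i])
    return units

units = [0.0, 5.0, 10.0]
som_step(units, 9.0)   # unit 2 is the BMU; units 1 and 2 move toward 9.0
```

Repeating this over many inputs makes nearby units respond to similar inputs, which is what produces the map's clustering behavior.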

A perceptron is the simplest artificial neural network: a single layer of weights feeding a threshold unit, making it a linear binary classifier. Perceptrons are used for simple classification tasks such as spam filtering, but they cannot solve problems that are not linearly separable.
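The perceptron learning rule is short enough to show directly. Below is a minimal sketch (weights and learning rate are arbitrary starting values) that learns the linearly separable AND function using the classic update `w += lr * (target - prediction) * x`.

```python
# Threshold unit: fires 1 if the weighted sum plus bias is positive.
def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Truth table for logical AND, which is linearly separable.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(20):                  # a few epochs suffice for AND
    for x, target in data:
        err = target - predict(w, b, x)
        w = [wi + lr * err * xi for wi, xi in zip(w, x)]
        b += lr * err
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this loop finds weights that classify all four cases correctly; on a non-separable problem like XOR it would never converge.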

Multi-layered perceptrons (MLPs) stack several layers of neurons and can be used for classification and regression tasks such as image recognition. Crucially for this question, an MLP is strictly feedforward: connections run only from input toward output, with no backward links, which is what distinguishes it from a recurrent network.

In conclusion, the network that involves backward links from output to the input and hidden layers is called a recurrent neural network (RNN).