The correct answer is D. None of these.
Weak learners are simple models that, on their own, perform only slightly better than random guessing. However, when they are combined in an ensemble, they can often outperform more complex models. This is because the ensemble can correct the mistakes of the individual weak learners — for example, boosting trains each new learner to focus on the examples the previous ones got wrong — and so produces a more accurate combined prediction.
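As a concrete sketch of this idea (my own illustration, not part of the original question), the snippet below uses decision stumps — one-threshold classifiers — as weak learners and combines them with AdaBoost-style reweighting on a tiny 1-D dataset that no single stump can separate:

```python
import math

# Toy 1-D dataset: no single threshold separates the two classes,
# so every individual stump misclassifies at least 2 of the 6 points.
X = [0, 1, 2, 3, 4, 5]
y = [1, 1, -1, -1, 1, 1]
thresholds = [i - 0.5 for i in range(7)]  # candidate split points

def stump(t, p):
    # Weak learner: predict p left of threshold t, -p to the right.
    return lambda x: p if x < t else -p

def best_stump(weights):
    # Exhaustively pick the stump with the lowest weighted error.
    best, best_err = None, float("inf")
    for t in thresholds:
        for p in (1, -1):
            h = stump(t, p)
            err = sum(w for w, xi, yi in zip(weights, X, y) if h(xi) != yi)
            if err < best_err:
                best, best_err = h, err
    return best, best_err

def adaboost(rounds):
    n = len(X)
    w = [1.0 / n] * n          # start with uniform example weights
    ensemble = []              # list of (alpha, stump) pairs
    for _ in range(rounds):
        h, err = best_stump(w)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, h))
        # Up-weight the points this stump got wrong so the next
        # stump concentrates on them.
        w = [wi * math.exp(-alpha * yi * h(xi))
             for wi, xi, yi in zip(w, X, y)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    # Weighted majority vote of the weak learners.
    return 1 if sum(a * h(x) for a, h in ensemble) > 0 else -1

ens = adaboost(3)
acc = sum(predict(ens, xi) == yi for xi, yi in zip(X, y)) / len(X)
print(acc)  # the 3-stump ensemble fits all 6 points: 1.0
```

Here the best single stump gets at most 4 of 6 points right, yet after three rounds of reweighting the ensemble classifies all six correctly — the ensemble "learns from the mistakes" of each weak learner exactly as described above.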
The statement “They have low variance and they don’t usually overfit” is not always true. Some weak learners have high variance, meaning their predictions are very sensitive to small changes in the training data. This can lead to overfitting: the model learns the noise in the training data and generalizes poorly to new data.
The statement “They have high bias, so they can not solve hard learning problems” is also not always true. Although weak learners individually often do have high bias — systematic error caused by a model that is too simple to capture the target function — ensembling can reduce that bias: boosting, for instance, builds an additive combination of many high-bias learners whose combined decision boundary is far more expressive than any single one. An ensemble of weak learners can therefore still solve hard learning problems.
In conclusion, weak learners are simple models that can be combined in an ensemble to produce more accurate predictions. Individually, however, they can have high variance or high bias, which can lead to overfitting or underfitting respectively — so neither blanket statement holds, and the correct answer is D.