The correct answer is B. They reduce model complexity.
Regularization is a technique used in machine learning to prevent overfitting. Overfitting occurs when a model fits the training data too closely, including its noise, and as a result fails to generalize to new data. Regularization works by adding a penalty term to the loss function that discourages the model from becoming too complex.
L1 regularization, also known as Lasso, adds a penalty proportional to the sum of the absolute values of the coefficients. This can drive some coefficients exactly to zero, so the model effectively selects a smaller subset of features. L2 regularization, also known as Ridge regression, adds a penalty proportional to the sum of the squared coefficients. This shrinks all coefficients toward zero but rarely makes any of them exactly zero, so every feature tends to keep a small weight.
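As a sketch, taking squared error as the base loss with regularization strength λ, the two penalized objectives can be written as:

```latex
\mathcal{L}_{\text{L1}}(w) = \sum_{i} \bigl(y_i - x_i^{\top} w\bigr)^2 + \lambda \sum_{j} |w_j|
\qquad
\mathcal{L}_{\text{L2}}(w) = \sum_{i} \bigl(y_i - x_i^{\top} w\bigr)^2 + \lambda \sum_{j} w_j^2
```

The only difference is the form of the penalty: the absolute-value term is what allows L1 to push coefficients exactly to zero, while the squared term in L2 only shrinks them.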
Both L1 and L2 regularization reduce model complexity and can improve generalization to unseen data. However, L1 regularization is more effective at reducing the number of features the model uses, while L2 regularization is more effective at shrinking the size of the coefficients without eliminating features entirely.
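A minimal sketch of this difference, using synthetic data where only 2 of 10 features actually matter (the data, λ value, and helper names here are illustrative assumptions, not part of the question): Ridge is solved in closed form, and Lasso via coordinate descent with soft-thresholding.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
X = rng.normal(size=(n, d))
true_w = np.zeros(d)
true_w[:2] = [3.0, -2.0]              # only the first 2 features matter
y = X @ true_w + 0.1 * rng.normal(size=n)

lam = 5.0                              # regularization strength (assumed)

# Ridge: closed-form solution of min ||Xw - y||^2 + lam * ||w||^2
ridge_w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Lasso: coordinate descent on min (1/2)||Xw - y||^2 + lam * ||w||_1
def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

lasso_w = np.zeros(d)
col_sq = (X ** 2).sum(axis=0)
for _ in range(200):
    for j in range(d):
        # partial residual with feature j's contribution removed
        r = y - X @ lasso_w + X[:, j] * lasso_w[j]
        lasso_w[j] = soft_threshold(X[:, j] @ r, lam) / col_sq[j]

print("ridge near-zero coefficients:", int(np.sum(np.abs(ridge_w) < 1e-6)))
print("lasso near-zero coefficients:", int(np.sum(np.abs(lasso_w) < 1e-6)))
```

On data like this, Ridge keeps a small nonzero weight on every feature, while Lasso zeroes out most of the irrelevant ones, which is exactly the sparsity-versus-shrinkage contrast described above.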
The other options are incorrect. Option A is incorrect because regularization is not aimed at interpretability; any gain there (for example, from L1's sparsity) is a side effect, not the purpose. Option C is incorrect because regularization does not maximize prediction accuracy; it deliberately sacrifices some fit on the training data in exchange for better generalization. Option D is incorrect because regularization does not require more training data; it is often applied precisely when more data is not available.