The correct answer is: A. 1 and 3
Ridge regression is a penalized linear regression method that shrinks the coefficients towards zero. This helps to prevent overfitting, which can occur when a model learns the noise in the data instead of the true underlying relationship.
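For concreteness, the ridge objective in standard textbook notation (the symbols below are illustrative, not taken from the original question) is:

$$\hat{\beta}^{\text{ridge}} = \arg\min_{\beta}\; \sum_{i=1}^{n}\big(y_i - x_i^\top \beta\big)^2 + \lambda \sum_{j=1}^{p} \beta_j^2$$

The first term is the usual sum of squared errors, and the second is the penalty whose strength is controlled by $\lambda$.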
When $\lambda$ is 0, ridge regression is equivalent to ordinary least squares (OLS) regression. This is because the penalty term vanishes when $\lambda=0$, so the only thing that matters is the sum of squared errors.
When $\lambda$ goes to infinity, the penalty term dominates the objective, so the coefficients are shrunk towards zero. As $\lambda$ grows, the coefficient estimates become arbitrarily small, even when that comes at the cost of fitting the training data poorly.
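A minimal sketch of both behaviours, using a synthetic dataset and the closed-form ridge solution (the data and helper name are illustrative assumptions, not part of the original question):

```python
import numpy as np

# Illustrative synthetic data (assumed for this sketch, not from the question)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=100)

def ridge_coefficients(X, y, lam):
    """Closed-form ridge solution: (X^T X + lam * I)^-1 X^T y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# lambda = 0: the penalty vanishes, so ridge matches ordinary least squares
ols_coef = np.linalg.lstsq(X, y, rcond=None)[0]
print(np.allclose(ols_coef, ridge_coefficients(X, y, lam=0.0)))  # True

# Increasing lambda: the coefficient norm shrinks towards zero
for lam in [0.0, 1.0, 100.0, 1e6]:
    coef = ridge_coefficients(X, y, lam)
    print(f"lambda={lam:>9}: ||beta|| = {np.linalg.norm(coef):.4f}")
```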
Therefore, the only statements that are true about ridge regression are 1 and 3.
Here is a more detailed explanation of each option:
- When $\lambda$ is 0, the model works like a linear regression model.
This is true because the penalty term vanishes when $\lambda=0$, so the only thing that remains in the objective is the sum of squared errors, exactly as in ordinary least squares.