What is/are true about ridge regression?

1. When lambda is 0, the model works like a linear regression model.
2. When lambda is 0, the model doesn't work like a linear regression model.
3. When lambda goes to infinity, we get very, very small coefficients approaching 0.
4. When lambda goes to infinity, we get very, very large coefficients approaching infinity.

A. 1 and 3
B. 1 and 4
C. 2 and 3
D. 2 and 4

The correct answer is: A. 1 and 3

Ridge regression is a penalized linear regression method that shrinks the coefficients towards zero. This helps to prevent overfitting, which can occur when a model learns the noise in the data instead of the true underlying relationship.
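
Concretely, ridge regression with penalty parameter $\lambda \ge 0$ minimizes the least-squares loss plus an $L_2$ penalty on the coefficients:

$$\hat{\beta}^{\text{ridge}} = \arg\min_{\beta} \sum_{i=1}^{n}\left(y_i - x_i^\top \beta\right)^2 + \lambda \sum_{j=1}^{p} \beta_j^2$$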

When $\lambda$ is 0, ridge regression is equivalent to ordinary least squares (OLS) regression. This is because the penalty term vanishes when $\lambda=0$, so the only thing that matters is the sum of squared errors.
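
As a quick check, here is a minimal NumPy sketch (the data and variable names are illustrative) using the closed-form ridge solution $\hat{\beta} = (X^\top X + \lambda I)^{-1} X^\top y$; with $\lambda = 0$ it reduces to the OLS normal equations:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                    # 100 samples, 3 features
true_beta = np.array([2.0, -1.0, 0.5])
y = X @ true_beta + rng.normal(scale=0.1, size=100)

def ridge(X, y, lam):
    """Closed-form ridge solution: (X'X + lam*I)^-1 X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

beta_ols  = np.linalg.lstsq(X, y, rcond=None)[0]   # ordinary least squares
beta_lam0 = ridge(X, y, lam=0.0)                   # ridge with lambda = 0

print(np.allclose(beta_ols, beta_lam0))            # True: same coefficients
```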

When $\lambda$ goes to infinity, the penalty term dominates the loss, so all of the coefficients are shrunk toward zero, even at the cost of a worse fit to the data.
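
Continuing the same sketch, sweeping $\lambda$ upward shows the shrinkage directly (the `ridge` helper is the one defined above):

```python
# Continuing the sketch above: the coefficient norm shrinks as lambda grows.
for lam in [0.0, 1.0, 100.0, 1e6]:
    beta = ridge(X, y, lam)
    print(f"lambda = {lam:>9.1f}   ||beta|| = {np.linalg.norm(beta):.6f}")
# The norm heads toward 0 as lambda -> infinity, but for any finite
# lambda the coefficients are small rather than exactly zero.
```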

Therefore, the only statements that are true about ridge regression are 1 and 3.


Here is a more detailed explanation of each option:

  1. When $\lambda$ is 0, the model works like a linear regression model.

This is true because the penalty term vanishes when $\lambda=0$, so the only thing that matters is the sum of squared errors. This is the same as ordinary least squares (OLS) regression.

  2. When $\lambda$ is 0, the model doesn't work like a linear regression model.

This is not true. When $\lambda=0$, ridge regression is equivalent to OLS regression.

  3. When $\lambda$ goes to infinity, we get very, very small coefficients approaching 0.

This is true because the penalty term becomes very large when $\lambda$ goes to infinity, so the coefficients are shrunk towards zero.

  4. When $\lambda$ goes to infinity, we get very, very large coefficients approaching infinity.

This is not true. As $\lambda$ goes to infinity, the penalty term dominates, so the coefficients shrink toward zero rather than growing without bound.
