The correct answer is: TRUE.
SVMs can solve both linear and non-linear problems. For linearly separable data, an SVM finds a hyperplane that separates the two classes. For non-linear problems, a kernel function (such as the RBF or polynomial kernel) implicitly maps the data into a higher-dimensional feature space where a separating hyperplane exists; in the original input space this corresponds to a non-linear decision boundary.
SVMs are a supervised machine learning algorithm that can be used for both classification and regression. The separating hyperplane is chosen to maximize the margin, the distance between the hyperplane and the nearest training points (the support vectors). Maximizing this margin makes the classifier more robust to noise and tends to improve generalization.
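As an illustrative sketch (assuming scikit-learn is available; the datasets and parameters below are just examples), the same `SVC` estimator can handle both cases: a linear kernel for linearly separable blobs, and an RBF kernel for concentric circles that no hyperplane in the input space can separate.

```python
# Sketch: linear vs. non-linear SVM classification with scikit-learn
# (illustrative datasets and hyperparameters, not a definitive recipe).
from sklearn.datasets import make_blobs, make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Linearly separable case: a linear kernel finds a separating hyperplane.
X_lin, y_lin = make_blobs(n_samples=200, centers=2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X_lin, y_lin, random_state=0)
linear_svm = SVC(kernel="linear", C=1.0).fit(X_tr, y_tr)
print("linear kernel accuracy:", linear_svm.score(X_te, y_te))

# Non-linear case: concentric circles cannot be split by a hyperplane
# in the input space, but an RBF kernel yields a non-linear boundary.
X_nl, y_nl = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X_nl, y_nl, random_state=0)
rbf_svm = SVC(kernel="rbf", gamma="scale", C=1.0).fit(X_tr, y_tr)
print("RBF kernel accuracy:", rbf_svm.score(X_te, y_te))
```

Swapping `kernel="linear"` for `kernel="rbf"` (or `"poly"`) is all it takes for the same model to handle non-linearly separable data, which is the heart of why the statement is true.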
SVMs have proven effective across a wide range of classification and regression tasks and are commonly used for spam filtering, image classification, and text classification.
Here is a brief explanation of each option:
- Option A: SVM can solve linear and non-linear problems. This is true: a linear SVM separates linearly separable data with a hyperplane, and with a non-linear kernel (e.g. RBF or polynomial) it learns a non-linear decision boundary, as shown in the sketch above.
- Option B: SVM can only solve linear problems. This is false: the kernel trick lets SVMs handle non-linearly separable data as well.
I hope this helps! Let me know if you have any other questions.