The correct answer is A. Data Partitioning.
Data partitioning is the process of splitting a dataset into two or more subsets. The most common form is to split the dataset into a training set and a test set: the training set is used to fit the machine learning model, and the test set is used to evaluate the model’s performance on data it has not seen.
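As a concrete illustration, the sketch below uses scikit-learn’s train_test_split, one common way to carry out this kind of partition; the toy arrays and the 80/20 ratio are assumptions chosen only for the example.

```python
from sklearn.model_selection import train_test_split
import numpy as np

# Toy feature matrix (100 samples, 4 features) and binary labels.
X = np.random.rand(100, 4)
y = np.random.randint(0, 2, size=100)

# Hold out 20% of the data as a test set; random_state makes the split reproducible.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

print(X_train.shape, X_test.shape)  # (80, 4) (20, 4)
```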
Data sampling is the process of selecting a subset of data from a larger dataset. Sampling can be used to reduce the size of a dataset so that it is cheaper and faster to work with, or, with techniques such as stratified sampling, to obtain a subset that is representative of the population from which the data were drawn.
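For example, a simple random sample can be drawn with NumPy; the dataset size and the 1% sampling rate below are arbitrary choices made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical large dataset: 1,000,000 rows, 5 features.
data = rng.normal(size=(1_000_000, 5))

# Draw a 1% random sample without replacement to get a smaller working subset.
sample_size = len(data) // 100
idx = rng.choice(len(data), size=sample_size, replace=False)
sample = data[idx]

print(sample.shape)  # (10000, 5)
```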
Data splitting is a closely related term for dividing a dataset into two or more subsets. Holding out data in this way helps detect overfitting, which occurs when a machine learning model learns the training data too well, including its noise, and is not able to generalize to new data: such a model scores well on the training set but poorly on the held-out set.
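The held-out set is what makes overfitting visible. The sketch below fits a deliberately unconstrained decision tree to synthetic data (both are assumptions chosen only for illustration) and compares training and test accuracy; a large gap between the two is the typical symptom.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic classification data, purely illustrative.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# An unconstrained decision tree can memorize the training data.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# A large train/test accuracy gap signals overfitting.
print("train accuracy:", model.score(X_train, y_train))
print("test accuracy: ", model.score(X_test, y_test))
```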
Data shuffling is the process of randomly rearranging the order of the records in a dataset. Shuffling, typically done before splitting or between training epochs, prevents the model from picking up patterns that are merely an artifact of the order in which the data were collected or stored.
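One common way to shuffle, sketched below with NumPy (the tiny arrays are only for illustration), is to apply a single random permutation to the features and labels together so that each row stays paired with its label.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Features and labels that happen to be stored sorted by label (all 0s, then all 1s).
X = np.arange(20).reshape(10, 2)
y = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

# One random permutation applied to both arrays keeps rows and labels aligned.
perm = rng.permutation(len(X))
X_shuffled, y_shuffled = X[perm], y[perm]

print(y_shuffled)  # labels now appear in random order
```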
In conclusion, the process of splitting a dataset used for machine learning into a training set and a test set is called data partitioning.