The correct answer is: C. build models with alternative subsets of the training data several times.
Bootstrapping is a statistical technique for estimating the sampling distribution of a statistic by repeatedly sampling with replacement from the original data set. In practice, this means we can build multiple models, each trained on a different resampled version of the training data, and compare their results to get a better sense of the uncertainty in our estimates.
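As a concrete illustration, the sketch below resamples a small regression data set with replacement, refits a model on each bootstrap sample, and uses the spread of the refitted coefficients as an estimate of their sampling variability. This is a minimal sketch under stated assumptions: the synthetic data, the use of NumPy and scikit-learn's LinearRegression, and the choice of 1,000 resamples are illustrative and not part of the original question.

```python
# Minimal bootstrap sketch, assuming NumPy and scikit-learn are available;
# the data and the linear model are purely illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Illustrative training data: a noisy linear relationship.
X = rng.uniform(0, 10, size=(100, 1))
y = 2.5 * X[:, 0] + rng.normal(0, 1, size=100)

n_samples = len(X)
slopes = []
for _ in range(1000):
    # Draw indices with replacement to form one bootstrap training set;
    # some instances appear more than once, others not at all.
    idx = rng.integers(0, n_samples, size=n_samples)
    model = LinearRegression().fit(X[idx], y[idx])
    slopes.append(model.coef_[0])

# The spread of the refitted coefficients approximates the sampling
# distribution of the slope estimate.
print(f"slope: {np.mean(slopes):.3f} +/- {np.std(slopes):.3f}")
```

Aggregating the predictions of models fitted on bootstrap samples, rather than only examining the spread of their coefficients, is the same idea that underlies bagging.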
Option A is not the correct answer, but not for the reason one might first think: because we sample with replacement, a single bootstrap sample can in fact contain the same training instance several times, while omitting other instances entirely (on average, only about 63.2% of the distinct instances appear in a given bootstrap sample). Repeating individual instances, however, is a by-product of the resampling mechanism rather than the purpose of bootstrapping; the defining idea is building models on alternative resampled training sets, which is why option C is the better answer. The short check after this paragraph illustrates the duplication.
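The following quick check, assuming NumPy is available, draws one bootstrap sample of indices and reports how many distinct original instances it contains; the sample size of 10,000 is an arbitrary choice for illustration.

```python
# One bootstrap draw of indices from a data set of size n; the fraction of
# distinct instances is expected to be about 1 - 1/e (roughly 0.632).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
sample = rng.integers(0, n, size=n)           # sampling with replacement
unique_fraction = len(np.unique(sample)) / n  # expected ~ 0.632
print(f"fraction of distinct instances: {unique_fraction:.3f}")
```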
Option B is incorrect because bootstrapping resamples the training data, not the test set. The test set is used to evaluate the performance of the model, and it must remain untouched by the training procedure so that the evaluation is unbiased.
Option D is incorrect because bootstrapping does not involve testing a model with alternative subsets of the test data. The test set is reserved for evaluating the final model, and repeatedly reusing portions of it in this way would yield an overly optimistic estimate of performance.
In conclusion, bootstrapping allows us to build models with alternative, resampled versions of the training data several times. This is what lets us estimate the sampling distribution of a statistic and better understand the uncertainty in our estimates.