Why is feature scaling done before applying the K-Means algorithm?

A. in distance calculation it will give the same weights for all features
B. you always get the same clusters whether you use feature scaling or not
C. in Manhattan distance it is an important step but in Euclidean it is not
D. none of these

The correct answer is: A. in distance calculation it will give the same weights for all features.

Feature scaling is the process of normalizing the features of a dataset so that they are on a similar scale. A common way to do this is standardization (z-score scaling): subtract the mean of each feature and divide by that feature's standard deviation.
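
As a minimal sketch (using NumPy and made-up toy values), standardization can be written as a column-wise operation:

```python
import numpy as np

# Toy feature matrix: feature 0 on a large scale (e.g. dollars),
# feature 1 on a small scale (e.g. years). Values are made up.
X = np.array([
    [50_000.0, 25.0],
    [80_000.0, 40.0],
    [30_000.0, 35.0],
    [120_000.0, 50.0],
])

# Standardization (z-score scaling): subtract each feature's mean
# and divide by its standard deviation, column by column.
X_scaled = (X - X.mean(axis=0)) / X.std(axis=0)

print(X_scaled.mean(axis=0))  # approximately 0 for every feature
print(X_scaled.std(axis=0))   # 1 for every feature
```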

Feature scaling matters because K-means finds clusters of data points that are close to each other in feature space. If the features are not scaled, the distance between two points is dominated by the features with the largest range of values, so the algorithm is effectively biased towards those features.

By scaling the features, we ensure that no single feature dominates the distance calculation, which usually results in more meaningful clusters.
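
The sketch below, assuming scikit-learn is available and using synthetic two-feature data (hypothetical "income" and "age" columns), illustrates how the clusters K-means finds can change once the features are standardized:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score
from sklearn.preprocessing import StandardScaler

# Synthetic data: feature 0 ("income") has a much larger numeric range
# than feature 1 ("age"), so it dominates unscaled distances.
rng = np.random.default_rng(0)
income = np.concatenate([rng.normal(40_000, 5_000, 50),
                         rng.normal(45_000, 5_000, 50)])
age = np.concatenate([rng.normal(25, 3, 50),
                      rng.normal(55, 3, 50)])
X = np.column_stack([income, age])

# K-means on the raw features: the income axis dominates the distance,
# so the clear separation along the age axis is largely ignored.
labels_raw = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# K-means after standardization: both features contribute comparably.
X_scaled = StandardScaler().fit_transform(X)
labels_scaled = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_scaled)

# A low adjusted Rand index means the two clusterings disagree,
# i.e. scaling changed which clusters K-means found.
print("Agreement (ARI) between raw and scaled clusterings:",
      adjusted_rand_score(labels_raw, labels_scaled))
```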

Here is a brief explanation of each option:

  • Option A: This is the correct answer. After scaling, every feature is on a comparable scale (with standardization, each has mean 0 and standard deviation 1), so all features carry the same weight in the distance calculation.
  • Option B: This is incorrect. The clusters found by K-means can change depending on whether feature scaling is used, because the distances between points change when the features are rescaled.
  • Option C: This is incorrect. Unscaled features with a large range dominate both the Manhattan and the Euclidean distance, so scaling is important for either metric (see the sketch after this list).
  • Option D: This is incorrect. Feature scaling is an important step before running K-means, and option A explains why.
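
To make the point about option C concrete, here is a small sketch with two hypothetical points and assumed per-feature spreads, showing that an unscaled feature with a large range dominates both the Manhattan and the Euclidean distance:

```python
import numpy as np

# Two hypothetical points: feature 0 in dollars, feature 1 in years.
a = np.array([40_000.0, 25.0])
b = np.array([41_000.0, 60.0])

diff = np.abs(a - b)                                      # [1000, 35]
print("Unscaled Manhattan:", diff.sum())                  # 1035.0, dominated by dollars
print("Unscaled Euclidean:", np.sqrt((diff ** 2).sum()))  # ~1000.6, dominated by dollars

# Divide each feature by an assumed typical spread (hypothetical
# per-feature standard deviations) before measuring distance.
scale = np.array([5_000.0, 3.0])
diff_scaled = diff / scale                                    # [0.2, ~11.67]
print("Scaled Manhattan:", diff_scaled.sum())                 # ~11.87
print("Scaled Euclidean:", np.sqrt((diff_scaled ** 2).sum())) # ~11.67
```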