Linear Discriminant Analysis (LDA), Maximum Class Separation!
Today we are going to talk about a supervised dimensionality reduction method whose goal is to maximize the distance between classes!
Linear Discriminant Analysis is all about finding a lower-dimensional space onto which to project your data, so that the projected features are more meaningful for your algorithm.
LDA can also be used for classification, but in this article I’ll mainly focus on LDA as a tool for data transformation and dimensionality reduction.
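To make that concrete, here is a minimal sketch of LDA as a dimensionality reduction step, assuming scikit-learn’s `LinearDiscriminantAnalysis` and the Iris dataset (my own illustration, not a listing from this article):

```python
# LDA as a supervised transformer: project 4-D Iris data down to 2-D.
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)  # 4 features, 3 classes

# With C classes, LDA yields at most C - 1 components (here: 2).
lda = LinearDiscriminantAnalysis(n_components=2)
X_projected = lda.fit_transform(X, y)  # supervised: class labels are required

print(X_projected.shape)  # (150, 2)
```

Note that, unlike PCA, the `fit` step takes the labels `y`; the projection is chosen to separate the classes rather than to preserve variance.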
As mentioned above, Fisher’s Linear Discriminant is about maximizing class separation, which makes it a supervised learning method. This is unlike PCA, an unsupervised dimensionality reduction method that preserves maximum variance.
LDA accomplishes this goal by maximizing the between-class variance, denoted S_B, while minimizing the within-class variance, denoted S_W. The reason for minimizing the variance within each class is to narrow the span of the classes in feature space, so that the projected features are more representative.
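As a sketch of what this optimization looks like in practice, the snippet below (assuming NumPy arrays, with a hypothetical helper `lda_directions` I’ve named for illustration) builds S_W and S_B and takes the leading eigenvectors of S_W⁻¹ S_B, which are the directions maximizing the ratio of between-class to within-class scatter:

```python
import numpy as np

def lda_directions(X, y, n_components):
    """Return the top LDA projection directions for data X and labels y."""
    d = X.shape[1]
    overall_mean = X.mean(axis=0)
    S_W = np.zeros((d, d))  # within-class scatter
    S_B = np.zeros((d, d))  # between-class scatter
    for c in np.unique(y):
        X_c = X[y == c]
        mean_c = X_c.mean(axis=0)
        # Within-class: spread of each class around its own mean.
        S_W += (X_c - mean_c).T @ (X_c - mean_c)
        # Between-class: spread of the class means around the overall mean.
        diff = (mean_c - overall_mean).reshape(-1, 1)
        S_B += len(X_c) * (diff @ diff.T)
    # Directions w maximizing (w^T S_B w) / (w^T S_W w) are the leading
    # eigenvectors of S_W^{-1} S_B (pinv guards against a singular S_W).
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(S_W) @ S_B)
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs[:, order[:n_components]].real

# Usage: W = lda_directions(X, y, n_components=2); X_proj = X @ W
```

This is only a didactic sketch; in practice a library implementation like scikit-learn’s handles the numerical details more robustly.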