Feature Learning, or Representation Learning, serves as a bridge between traditional machine learning techniques and the theory underlying Deep Learning. Unsupervised learning methods are used to reduce the dimensionality of the problem while preserving as much information as possible; the transformed data (the input space mapped into a lower-dimensional feature space) can then be used for supervised classification with classical machine learning techniques.
The performance of a machine learning algorithm depends strongly on how the input data are represented (the features). Much of the effort in building a better machine learning system therefore goes into designing a sequence of transformations that brings the input data into a format allowing the classification algorithm to maximize its performance.
Various unsupervised learning techniques, some already discussed and others covered in the upcoming sections, include:
- K-means segmentation
- Hopfield network
- Sparse Coding
- Principal Component Analysis (PCA in section 2.10.1)
- Restricted Boltzmann Machines (RBM in section 4.8.2)
- AutoEncoders (in section 4.8.3)
These techniques reduce the dimensionality of the problem while striving to retain the maximum amount of information, i.e., preserving those aspects of the training data that best describe the samples.
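As an illustration of this idea, the sketch below performs dimensionality reduction with PCA (section 2.10.1) using plain numpy: the data are projected onto the directions of maximum variance, discarding a near-redundant axis while retaining most of the information. This is a minimal example on synthetic data, not code from this text; the variable names and the choice of two retained components are illustrative assumptions.

```python
import numpy as np

# Minimal PCA sketch (illustrative, not from the text): project data onto
# the directions of maximum variance to reduce dimensionality.
rng = np.random.default_rng(0)

# Synthetic 3-D samples whose third axis is nearly redundant.
X = rng.normal(size=(200, 3))
X[:, 2] = 0.05 * X[:, 0] + 0.01 * rng.normal(size=200)

# Center the data and compute the covariance matrix.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(Xc) - 1)

# Eigen-decomposition: eigenvectors are the principal directions.
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]   # sort by decreasing variance
W = eigvecs[:, order[:2]]           # keep the top-2 components

# Map the input space into the lower-dimensional feature space.
Z = Xc @ W
print(Z.shape)  # (200, 2)
print("variance retained:", eigvals[order][:2].sum() / eigvals.sum())
```

The reduced features `Z` could then be fed to any classical supervised classifier, which is exactly the two-stage pipeline described above.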
Paolo Medici
2025-10-22