A Friendly Introduction to PCA
After years of almost, but not quite, fully understanding PCA, here is my attempt to explain it fully, hopefully leaving some of the magic intact.

We will work from the outside in: we will view PCA first as a way of finding a smaller representation of a dataset. This is a typical machine learning problem: find a compressed representation of the data such that the reconstructions are as close to the original as possible. This is a simple view of PCA, and we'll be able to compute it with nothing more than gradient descent, plus a few extra tricks for satisfying constraints.
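To make that concrete, here is a minimal sketch in Python with numpy of PCA viewed as exactly this reconstruction problem. It is an illustration under assumptions, not the article's eventual derivation: it assumes a mean-centered data matrix `X`, optimizes a projection matrix `W` by gradient descent on the average squared reconstruction error, and uses one simple constraint trick, re-orthonormalizing `W` with a QR decomposition after every step. The function name `pca_by_gradient_descent` and all hyperparameters are illustrative choices.

```python
import numpy as np

def pca_by_gradient_descent(X, k=2, lr=0.05, steps=1000):
    """Sketch: fit a k-dimensional projection by minimizing reconstruction
    error with projected gradient descent.

    X: (n, d) data matrix, assumed mean-centered.
    Returns W: (d, k) matrix with orthonormal columns spanning the learned subspace.
    """
    n, d = X.shape
    rng = np.random.default_rng(0)
    # random orthonormal starting point for the projection matrix
    W = np.linalg.qr(rng.standard_normal((d, k)))[0]

    for _ in range(steps):
        Z = X @ W              # compressed representation, shape (n, k)
        X_hat = Z @ W.T        # reconstruction, shape (n, d)
        R = X_hat - X          # reconstruction error
        # gradient of the average squared error ||X W W^T - X||^2 / n w.r.t. W
        grad = 2.0 * (X.T @ R @ W + R.T @ X @ W) / n
        W = W - lr * grad
        # constraint trick: project back onto matrices with orthonormal columns
        W = np.linalg.qr(W)[0]

    return W

# usage: 200 five-dimensional points with unequal variances, compressed to two dimensions
X = np.random.default_rng(1).standard_normal((200, 5)) @ np.diag([3.0, 2.0, 1.0, 0.5, 0.2])
X = X - X.mean(axis=0)
W = pca_by_gradient_descent(X, k=2)
Z = X @ W                      # the smaller representation
X_hat = Z @ W.T                # its reconstruction
print(np.mean((X - X_hat) ** 2))
```

As written, this only pins down a basis for the best k-dimensional subspace, not the individual, ordered principal components; that is where the extra tricks for satisfying constraints come in.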
