Manifold hypothesis

Created: 2023 May 24 8:27
Creator: Seonglae Cho
Edited: 2026 Jan 12 12:29

Manifold Assumption

Assumes that high-dimensional data is concentrated near a low-dimensional manifold embedded in the ambient space
Distance measured along the manifold (geodesic distance) is more informative than distance in the original ambient dimensions
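A minimal NumPy sketch of the second point, using an illustrative 1-D spiral embedded in 3-D (the curve and its parameters are assumptions for illustration, not from the source): two points on adjacent turns are close in ambient Euclidean distance but far apart along the manifold itself.

```python
import numpy as np

# A 1-D manifold (a spiral) embedded in 3-D ambient space.
t = np.linspace(0, 4 * np.pi, 500)           # intrinsic coordinate
X = np.stack([np.cos(t), np.sin(t), 0.1 * t], axis=1)

# Two points on adjacent turns: t ≈ 0 vs t ≈ 2π.
i = 0
j = int(np.argmin(np.abs(t - 2 * np.pi)))

# Ambient (straight-line) distance: small, mostly the z-offset 0.1·2π.
ambient = np.linalg.norm(X[i] - X[j])

# Geodesic distance approximated by summing small steps along the curve:
# roughly the arc length of one full turn, an order of magnitude larger.
steps = np.diff(X[i:j + 1], axis=0)
geodesic = np.linalg.norm(steps, axis=1).sum()

print(ambient, geodesic)   # ambient ≈ 0.63, geodesic ≈ 6.3
```

A nearest-neighbor method that trusts ambient distance would wrongly treat these two points as similar; methods that respect the manifold (e.g. graph-based geodesic estimates) would not.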
Manifold hypothesis
In theoretical computer science and the study of machine learning, the manifold hypothesis is the hypothesis that many high-dimensional data sets that occur in the real world actually lie along low-dimensional latent manifolds inside that high-dimensional space.[1][2][3] As a consequence, many data sets that initially appear to require many variables to describe can in fact be described by a comparatively small number of variables, akin to the local coordinate system of the underlying manifold. This principle is suggested to underpin the effectiveness of machine learning algorithms in describing high-dimensional data sets by considering a few common features.
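The "small number of variables" consequence can be checked in a linear toy case (an assumption for illustration; real data manifolds are generally curved): data generated from only 2 latent variables but embedded in 50 ambient dimensions is almost entirely captured by its top 2 principal components.

```python
import numpy as np

rng = np.random.default_rng(0)

# 50-D observations generated from only 2 latent variables plus small
# noise: a linear toy instance of the manifold hypothesis.
n, ambient_dim, latent_dim = 1000, 50, 2
Z = rng.normal(size=(n, latent_dim))                  # latent coordinates
W = rng.normal(size=(latent_dim, ambient_dim))        # linear embedding map
X = Z @ W + 0.01 * rng.normal(size=(n, ambient_dim))  # ambient data

# PCA via SVD on centered data: fraction of variance per component.
Xc = X - X.mean(axis=0)
s = np.linalg.svd(Xc, compute_uv=False)
explained = (s ** 2) / (s ** 2).sum()

print(explained[:2].sum())   # ≈ 0.999: two directions describe nearly all of X
```

PCA only recovers linear low-dimensional structure; the hypothesis itself concerns curved manifolds, which is why nonlinear methods (and neural networks) are needed in general.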
What Are Neural Networks Even Doing? (Manifold Hypothesis)
In this video, I try to crack open the black box we call a neural network. The animations were made using Manim Community Edition; the code used to generate them is at https://github.com/igreat/videos/tree/main/manifold-hypothesis
Timestamps: 0:00 recap, 0:49 visualizing neural networks in 2D, 3:05 linear transformations, 3:27 nonlinear transformations, 4:30 affine transformations, 5:14 back to 2D neural networks, 7:04 why use more neurons per layer?, 10:10 manifold hypothesis, 11:36 visualizing handwritten digit separation, 12:56 conclusion