Random sparse sampling → interpolation
The original signal can be recovered from far fewer samples than the Nyquist frequency requires
As long as the signal is sparse, it can be perfectly recovered using a measurement matrix with low mutual coherence
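A minimal sketch of what "low mutual coherence" means: the largest absolute normalized inner product between distinct columns of the measurement matrix. The matrix sizes and seed here are illustrative assumptions, not from the sources below; random Gaussian matrices tend to have low coherence, which is why random sampling works.

```python
import numpy as np

def mutual_coherence(A):
    """Largest absolute normalized inner product between distinct columns of A."""
    cols = A / np.linalg.norm(A, axis=0)   # normalize each column to unit norm
    G = np.abs(cols.T @ cols)              # Gram matrix of column correlations
    np.fill_diagonal(G, 0.0)               # ignore self-correlations
    return G.max()

rng = np.random.default_rng(0)
A = rng.standard_normal((25, 50))          # illustrative random measurement matrix
print(mutual_coherence(A))                 # typically well below 1 for random A
```

An orthonormal matrix has coherence 0; the closer the coherence is to 1, the harder sparse recovery becomes.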
Exact compressed sensing is NP-hard, which the neural network certainly isn't doing.
Compressed sensing notion
D. L. Donoho
Compressed sensing
Compressed sensing (also known as compressive sensing, compressive sampling, or sparse sampling) is a signal processing technique for efficiently acquiring and reconstructing a signal, by finding solutions to underdetermined linear systems. This is based on the principle that, through optimization, the sparsity of a signal can be exploited to recover it from far fewer samples than required by the Nyquist–Shannon sampling theorem. There are two conditions under which recovery is possible.[1] The first one is sparsity, which requires the signal to be sparse in some domain. The second one is incoherence, which is applied through the isometric property, which is sufficient for sparse signals.[2][3] Compressed sensing has applications in, for example, MRI where the incoherence condition is typically satisfied.[4]
https://en.wikipedia.org/wiki/Compressed_sensing
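The optimization-based recovery described above can be sketched as basis pursuit: minimize the ℓ1 norm of x subject to Ax = y, cast as a linear program via x = u − v with u, v ≥ 0. The sizes, seed, and use of `scipy.optimize.linprog` are illustrative assumptions, not details from the quoted sources.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 50, 25, 3                      # signal length, measurements, sparsity
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)  # k-sparse signal

A = rng.standard_normal((m, n)) / np.sqrt(m)  # random Gaussian measurement matrix
y = A @ x_true                                # m << n compressed measurements

# Basis pursuit: min ||x||_1  s.t.  Ax = y, as an LP over [u; v] with x = u - v
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]

print(np.allclose(x_hat, x_true, atol=1e-4))  # True when recovery is exact
```

With far fewer measurements than the ambient dimension (here 25 vs. 50), the ℓ1 solution typically coincides with the true sparse signal, which is the recovery guarantee the sparsity and incoherence conditions buy.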
The Rediscovery of Randomness: Compressed Sensing
Compressed sensing is also called Compressive Sensing in English. What is compressed sensing? - The animation above illustrates the Nyquist sampling theorem.
https://www.linkedin.com/pulse/random의-재발견-압축센싱compressed-sensing-gromit-park/?originalSubdomain=kr
neural network
Toy Models of Superposition
It would be very convenient if the individual neurons of artificial neural networks corresponded to cleanly interpretable features of the input. For example, in an “ideal” ImageNet classifier, each neuron would fire only in the presence of a specific visual feature, such as the color red, a left-facing curve, or a dog snout. Empirically, in models we have studied, some of the neurons do cleanly map to features. But it isn't always the case that features correspond so cleanly to neurons, especially in large language models where it actually seems rare for neurons to correspond to clean features. This brings up many questions. Why is it that neurons sometimes align with features and sometimes don't? Why do some models and tasks have many of these clean neurons, while they're vanishingly rare in others?
https://transformer-circuits.pub/2022/toy_model/index.html#related-compressed

Seonglae Cho