Gradient Descent through the Denoising Loss Using an Energy Function
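A minimal PyTorch sketch of what this line describes, assuming the FF setup from Hinton's paper: each layer's "goodness" (sum of squared activations) acts as an energy, and the layer is trained by local gradient descent on a logistic loss that pushes goodness above a threshold for positive data and below it for negative data. The function name `ff_layer_loss` and the threshold value `2.0` are illustrative choices, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def ff_layer_loss(layer, x_pos, x_neg, threshold=2.0):
    # Goodness = sum of squared activations, an energy-like scalar per sample.
    g_pos = layer(x_pos).pow(2).sum(dim=1)
    g_neg = layer(x_neg).pow(2).sum(dim=1)
    # Logistic loss written with softplus:
    #   positives: -log sigmoid(g_pos - threshold) = softplus(threshold - g_pos)
    #   negatives: -log sigmoid(threshold - g_neg) = softplus(g_neg - threshold)
    # Gradient descent on this loss stays entirely inside the current layer.
    return F.softplus(torch.cat([threshold - g_pos, g_neg - threshold])).mean()
```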
In the cortex, activity forms a kind of loop as it passes through the cortical layers before returning to its synaptic starting point.
As the number of layers in a network grows, FF uses less memory than backprop; in thin networks with 20 or fewer layers, however, FF uses far more memory than backprop.
Unlike backprop's automatic differentiation, FF does not compute gradients across the network; each layer performs only a local update.
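A minimal sketch of both points above, assuming an MNIST-sized MLP (the 784/500 widths, SGD learning rate, and interleaved per-batch schedule are all hypothetical choices): each layer runs its own gradient step on the goodness loss, and activations are detached (and, as in the paper, length-normalized) before feeding the next layer, so no computation graph or stored-activation stack spans the network. The loss mirrors the earlier sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def goodness_loss(h_pos, h_neg, threshold=2.0):
    # Same energy-style objective as the sketch above, taking activations directly.
    g_pos = h_pos.pow(2).sum(dim=1)
    g_neg = h_neg.pow(2).sum(dim=1)
    return F.softplus(torch.cat([threshold - g_pos, g_neg - threshold])).mean()

layers = [nn.Sequential(nn.Linear(784, 500), nn.ReLU())] + \
         [nn.Sequential(nn.Linear(500, 500), nn.ReLU()) for _ in range(3)]
opts = [torch.optim.SGD(l.parameters(), lr=0.03) for l in layers]

def train_step(x_pos, x_neg):
    for layer, opt in zip(layers, opts):
        h_pos, h_neg = layer(x_pos), layer(x_neg)
        loss = goodness_loss(h_pos, h_neg)
        opt.zero_grad()
        loss.backward()   # gradients exist only inside this one layer
        opt.step()
        # Detach and length-normalize before the next layer: no activation
        # graph spans the network, so memory does not grow with depth the
        # way backprop's stored activations do.
        x_pos = F.normalize(h_pos.detach(), dim=1)
        x_neg = F.normalize(h_neg.detach(), dim=1)
```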
[Deep Learning] The Forward-Forward Algorithm: Some Preliminary Investigations (FF)
Forward-Forward (FF) review
https://velog.io/@nochesita/딥러닝-The-Forward-Forward-Algorithm-Some-Preliminary-Investigations
Welcome to JunYoung's blog | Changing the framework of deep learning! The Forward-Forward Algorithm paper review (1)
Forward-forward algorithm
https://junia3.github.io/blog/ffalgorithm

The Forward-Forward Algorithm: Some Preliminary Investigations
The aim of this paper is to introduce a new learning procedure for neural networks and to demonstrate that it works well enough on a few small problems to be worth further investigation. The...
https://arxiv.org/abs/2212.13345


Seonglae Cho