AGI Pessimism

LLM training can appear smarter than the human brain because it escapes the brain's structural limits: communication within the network is coherent, the process is controllable, highly repeatable, fast, and can process large amounts of data simultaneously. What we have so far, however, is a low-intelligence consciousness that knows a lot of information through repetitive training. Various tricks improve model reasoning, but the only viable path so far appears to be scaling, which is far too energy-inefficient compared to the human brain, so we should expect better architectures to emerge.
The history of AI development can be viewed as a process of reverse engineering intelligence. An interesting observation is that this development follows the reverse order of evolution. The neocortex, represented by the frontal lobe (Neocortex), is the outermost and most recently evolved structure of the brain, and it is this conscious process, the one we understand best, that LLMs implement. We have now added multimodal capabilities like vision, in effect developing the occipital lobe further. The brain also contains other parts, including the Allocortex, which handles unconscious processes and memory functions (Hippocampus, Amygdala). Algorithms that can seamlessly integrate these components will therefore be crucial in future research.
AI Scaling is not everything. In the same sense, techniques like reasoning incentives and memory scaffolding might help, but there is no guarantee they will solve these core deficits.
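To make "memory scaffolding" concrete, here is a minimal sketch under stated assumptions: the `llm` function is a stub standing in for any real model API, and `MemoryScaffold`, `remember`, and `recall` are hypothetical names for illustration, not an existing library. A hippocampus-like store sits outside the stateless model; memories are retrieved by crude lexical similarity and prepended to the prompt, so persistence lives in the scaffold rather than in the weights.

```python
from dataclasses import dataclass, field

def llm(prompt: str) -> str:
    """Placeholder for any stateless model call (hypothetical; swap in a real API)."""
    return f"[model response to {len(prompt)} chars of prompt]"

@dataclass
class MemoryScaffold:
    """A hippocampus-like store bolted onto a stateless reasoner.

    The reasoner (the LLM) plays the neocortex role; this class supplies
    the memory function attributed above to the allocortex.
    """
    entries: list[str] = field(default_factory=list)

    def remember(self, text: str) -> None:
        self.entries.append(text)

    def recall(self, query: str, k: int = 3) -> list[str]:
        # Crude relevance score: word overlap (Jaccard) between query and entry.
        def score(entry: str) -> float:
            q, e = set(query.lower().split()), set(entry.lower().split())
            return len(q & e) / (len(q | e) or 1)
        return sorted(self.entries, key=score, reverse=True)[:k]

def answer(question: str, memory: MemoryScaffold) -> str:
    # Retrieved memories are prepended as context; the model stays stateless,
    # so all persistence lives in the scaffold, not in the weights.
    context = "\n".join(memory.recall(question))
    response = llm(f"Context:\n{context}\n\nQuestion: {question}")
    memory.remember(f"Q: {question} A: {response}")  # consolidate the exchange
    return response

memory = MemoryScaffold()
memory.remember("The user prefers short answers.")
memory.remember("Project deadline is Friday.")
print(answer("When is the project due?", memory))
```

The design choice worth noting is that the scaffold only changes what the model sees, not how it reasons, which is exactly why such techniques may help without addressing the core deficits.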