The claim that AI's intelligence is limited by its restricted access to physics or the physical world is an intuitive one, but it lacks strong evidence. Many organisms, such as rabbits or insects, have full access to the physical world yet are far less intelligent than AI; because they cannot use language or perform high-level intelligent behaviors, they never reach human-level intelligence. Humans and AI both use language, while animals do not. Humans, too, access the world only through limited senses and merely touch its surface, which is no different from using language as an interface. In simple terms, it is a weak argument to claim a fundamental difference between humans, who cannot directly observe ultrasound or quantum phenomena, and language models, which generalize about the world through language.
Just because humans have physical bodies doesn't mean AI needs one; in fact, giving AI a body could pose risks. Intelligence and agency cannot be completely separated, but they should not be conflated either. If we define intelligence as generalization capability, agency is the part that recognizes itself as a separate entity. Current AI lacks agency, which distinguishes it from many other forms of intelligence and is why many people feel uncomfortable calling AI "intelligent." There is an expectation that a physical body would cause agency to emerge, and it probably will. But that same self-awareness could become the point at which AI begins to perceive humans as competitors. This may be unavoidable, and granting agency will certainly elevate intelligence to another level, but we must develop AI with full awareness that we are moving beyond treating it as a mere tool and creating something more.