Just as the uncertainty principle's tradeoff imposes minimum limits, and biological reproduction runs into an ecosystem's carrying capacity, AI, despite its initial exponential growth, is likely to face limits on intelligence because it is implemented in physical reality. Both communication between human neurons and vector operations on semiconductors slow down with physical distance, so if physical laws do impose limits on intelligence, they would likely manifest as a tradeoff between the quantity of knowledge and the quality of reasoning. We do not know how far above current human-level intelligence such a physically imposed limit lies; unlike the case of transformer scaling laws, structural complexity can also grow exponentially, so whether the limit sits just above us or far away remains unknown.
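As a rough back-of-the-envelope illustration of why distance matters, the toy calculation below compares round-trip signal delays at a few physical scales. The sizes and signal speeds are assumed order-of-magnitude figures chosen for the sketch, not measurements, and the mapping from "system size" to "knowledge" is only an analogy.

```python
# Toy back-of-the-envelope: how round-trip signal delay scales with the
# physical size of a "thinking" system. All sizes and speeds below are
# assumed, order-of-magnitude figures used only for illustration.

SPEED_OF_LIGHT_M_S = 3.0e8        # vacuum; interconnect signals travel at a fraction of this
SIGNAL_FRACTION_OF_C = 0.5        # assumed effective fraction of c for electrical/optical links
NEURON_CONDUCTION_M_S = 100.0     # myelinated axons reach roughly this order of magnitude

def round_trip_latency_s(distance_m: float, speed_m_s: float) -> float:
    """Time for one signal round trip across the system at the given speed."""
    return 2.0 * distance_m / speed_m_s

systems = {
    "single chip (~3 cm)":   (0.03,  SIGNAL_FRACTION_OF_C * SPEED_OF_LIGHT_M_S),
    "server rack (~2 m)":    (2.0,   SIGNAL_FRACTION_OF_C * SPEED_OF_LIGHT_M_S),
    "datacenter (~300 m)":   (300.0, SIGNAL_FRACTION_OF_C * SPEED_OF_LIGHT_M_S),
    "human brain (~0.15 m)": (0.15,  NEURON_CONDUCTION_M_S),
}

for name, (distance_m, speed_m_s) in systems.items():
    latency = round_trip_latency_s(distance_m, speed_m_s)
    # A bigger system can hold more state ("knowledge"), but every extra meter
    # adds delay to each global exchange of information, capping how many
    # tightly coupled reasoning steps fit into a second.
    print(f"{name:24s} round trip ≈ {latency * 1e6:10.4f} µs "
          f"→ ≈ {1.0 / latency:,.0f} global exchanges per second")
```

The pattern is the point: growing a system to hold more makes every global exchange slower, which is one concrete way a quantity-versus-quality tradeoff could show up.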
Is there an appropriate way to predict this limit and to estimate the intelligence that could be reached within it?
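No established method is claimed here. As one hedged sketch in the spirit of the carrying-capacity analogy above, one could posit a logistic growth curve for some measurable capability metric and fit its asymptote. The metric, the logistic form, and the synthetic observations below are all assumptions made purely for illustration.

```python
# A minimal sketch, assuming (1) some scalar capability metric can be observed
# over time and (2) its growth follows a logistic (carrying-capacity) curve.
# Both assumptions are illustrative; the observations below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, limit, rate, midpoint):
    """Near-exponential growth at first, saturating at `limit` (the carrying capacity)."""
    return limit / (1.0 + np.exp(-rate * (t - midpoint)))

# Synthetic "capability over time" measurements drawn from a known curve.
rng = np.random.default_rng(0)
true_limit, true_rate, true_mid = 100.0, 0.6, 12.0
t_obs = np.arange(0, 30, dtype=float)
y_obs = logistic(t_obs, true_limit, true_rate, true_mid) + rng.normal(0.0, 1.0, size=t_obs.shape)

# Fit the three parameters; p0 is a rough initial guess
# (largest value seen so far, unit rate, middle of the observed range).
p0 = [y_obs.max(), 1.0, t_obs.mean()]
(est_limit, est_rate, est_mid), _ = curve_fit(logistic, t_obs, y_obs, p0=p0)

print(f"estimated limit ≈ {est_limit:.1f}  (true value behind the synthetic data: {true_limit})")
# Caveat: given only the early, still exponential-looking stretch of the series,
# the fitted asymptote is very poorly constrained; that is precisely why the
# question is hard to answer from early growth alone.
```

Even in this idealized setting, fitting only the early portion of the curve leaves the asymptote essentially unconstrained, which mirrors the uncertainty about whether the limit is just above us or far away.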
Physical laws may impose constraints on the development of intelligence.