Algorithmic complexity, algorithmic entropy
Also known as Solomonoff–Kolmogorov–Chaitin complexity, program-size complexity, or descriptive complexity, Kolmogorov complexity is the length of the shortest computer program that outputs a given object, or equivalently the computational resources needed to describe it.
It is one of the key measures of the complexity of finite-length data sequences: it gives the minimum length of a program that can reproduce the given data, much as MDL (minimum description length) does.
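Kolmogorov complexity itself is uncomputable, but the length of a compressed encoding gives a computable upper bound. A minimal sketch in Python, using zlib as a stand-in compressor (an illustration of the idea, not a definition):

```python
import os
import zlib

def description_length(data: bytes) -> int:
    """Compressed size in bytes: a computable upper bound on
    Kolmogorov complexity, which itself is uncomputable."""
    return len(zlib.compress(data, 9))

repetitive = b"ab" * 500        # highly regular: a short program prints it
random_like = os.urandom(1000)  # incompressible with high probability

print(description_length(repetitive))   # small (tens of bytes)
print(description_length(random_like))  # close to the raw 1000 bytes
```

The regular string compresses far below its raw length, while random bytes stay near 1000: with high probability no description meaningfully shorter than the data itself exists.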
Interpretation
Kolmogorov complexity can be read as a natural loss function for emergence: lower-complexity implementations appear first, so programs that would only emerge later are not selected in Evolution because of their delayed appearance.
Kolmogorov complexity
In algorithmic information theory, the Kolmogorov complexity of an object, such as a piece of text, is the length of a shortest computer program that produces the object as output. It is a measure of the computational resources needed to specify the object, and is also known as algorithmic complexity, Solomonoff–Kolmogorov–Chaitin complexity, program-size complexity, descriptive complexity, or algorithmic entropy. It is named after Andrey Kolmogorov, who first published on the subject in 1963 and is a generalization of classical information theory.
https://en.wikipedia.org/wiki/Kolmogorov_complexity
FunSearch: Making new discoveries in mathematical sciences using Large Language Models
We introduce FunSearch, a method for searching for “functions” written in computer code that finds new solutions in mathematics and computer science. FunSearch works by pairing a pre-trained LLM,...
https://deepmind.google/discover/blog/funsearch-making-new-discoveries-in-mathematical-sciences-using-large-language-models/
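The blog describes an evolutionary loop: an LLM proposes variants of promising programs, an automated evaluator scores them, and the best survive. A minimal sketch of such a loop, where `llm_propose` and `evaluate` are hypothetical placeholders, not DeepMind's actual API:

```python
import random
from typing import Callable

def funsearch_sketch(
    seed_program: str,
    llm_propose: Callable[[str], str],  # hypothetical: LLM rewrites a program
    evaluate: Callable[[str], float],   # problem-specific scoring function
    iterations: int = 100,
) -> str:
    """Evolutionary program search in the spirit of FunSearch: keep a small
    population of programs, ask an LLM for variants of promising ones, and
    retain the best-scoring code."""
    population = [(evaluate(seed_program), seed_program)]
    for _ in range(iterations):
        # Tournament selection: pick the better of two random candidates.
        _, parent = max(random.sample(population, k=min(2, len(population))))
        child = llm_propose(parent)     # LLM-generated variant
        try:
            score = evaluate(child)     # sandboxed evaluation in practice
        except Exception:
            continue                    # discard programs that crash
        population.append((score, child))
        population = sorted(population, reverse=True)[:10]  # elitist cap
    return max(population)[1]
```

The key design point, per the blog, is that the evaluator guards against LLM hallucination: only programs that actually run and score well survive.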
Epiplexity
Traditional information theory (Shannon entropy, Kolmogorov complexity) assumes unbounded computational resources, which leads to a mismatch with the notion of information observed in actual machine learning: under the classical view, deterministic transformations do not increase information, and information content is independent of data order.
However, real-world models (LLMs, neural networks) have limited computation and therefore build "meaningful structure" during training. The information that matters in machine learning is not Shannon information but structural information that is computationally accessible to the model. Epiplexity is proposed as a notion of information that can actually be learned under computational constraints.
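A minimal sketch of the mismatch this targets: empirical Shannon entropy is invariant under reordering, but a resource-bounded compressor (standing in here for a learner) can only exploit structure it can find. This contrast is illustrative only, not the epiplexity formalism itself:

```python
import math
import random
import zlib
from collections import Counter

def empirical_entropy(data: bytes) -> float:
    """Empirical Shannon entropy in bits per byte; it depends only on
    symbol counts, so it is invariant under reordering the data."""
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

ordered = bytes(range(256)) * 16   # strongly structured sequence
shuffled = bytearray(ordered)
random.shuffle(shuffled)           # same symbols, order destroyed
shuffled = bytes(shuffled)

# Identical Shannon entropy: order carries no classical information here.
print(empirical_entropy(ordered), empirical_entropy(shuffled))  # 8.0, 8.0

# A compressor, working under bounded computation, treats the two
# sequences very differently because only one has findable structure.
print(len(zlib.compress(ordered, 9)), len(zlib.compress(shuffled, 9)))
```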

Seonglae Cho