In algorithmic information theory (a subfield of computer science and mathematics), the Kolmogorov complexity of an object, such as a piece of text, is the length of the shortest computer program (in a predetermined programming language) that produces the object as output. It is a measure of the computational resources needed to specify the object, and is also known as descriptive complexity, Kolmogorov–Chaitin complexity, algorithmic complexity, algorithmic entropy, or program-size complexity.

https://en.wikipedia.org/wiki/Kolmogorov_complexity

E.g. an infinite string of 0s can be described very concisely. A string of

*truly random* bits can, by definition, not be described with fewer bits than it contains.
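A rough way to see this in practice (a sketch: a general-purpose compressor like zlib is only a loose upper bound on Kolmogorov complexity, not an oracle for it):

```python
import os
import zlib

# A megabyte of zeros has a tiny shortest description
# ("print a million zero bytes"), and zlib finds most of that redundancy.
zeros = b"\x00" * 1_000_000
print(len(zlib.compress(zeros)))  # a tiny fraction of the input size

# A megabyte of (cryptographically) random bytes has no shorter
# description than itself, so compression gains essentially nothing.
noise = os.urandom(1_000_000)
print(len(zlib.compress(noise)))  # roughly the full megabyte
```

The compressed size of the zeros is dominated by format overhead, while the random bytes come out at essentially their original length.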

> In that case, a reference: the idea of the lossless pgc noise is that that library "potentially" holds all human knowledge plus an inordinate amount of irrelevant noise, so all you need to know is the reference to the start of that knowledge, which is as small as the starting seed. It's basically an address into a virtual memory, aka "the library".

The point is: how small that seed can actually be depends on the entropy of the information that you are trying to retrieve.

Procedural generation works because its output actually has

*very low entropy* in information-theoretic terms.
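This is the whole trick behind the "seed as address" idea: a tiny seed plus a fixed generator is a short program that deterministically reproduces a huge output. A minimal sketch (using Python's built-in Mersenne Twister; `generate_world` is a hypothetical name, not an API from the discussion):

```python
import random

def generate_world(seed: int, size: int) -> bytes:
    """Deterministically expand a tiny seed into `size` bytes of 'content'."""
    rng = random.Random(seed)  # same seed => same stream, always
    return rng.randbytes(size)

# A megabyte of procedurally generated bytes, fully determined by one int.
world = generate_world(seed=42, size=1_000_000)

# Anyone with the same seed and the same generator reproduces it exactly:
# the short description (seed + generator) is the "program" for the output.
assert generate_world(42, 1_000_000) == world
```

The output may look like noise, but its Kolmogorov complexity is bounded by the size of the seed plus the generator code, which is why it compresses conceptually (if not via zlib) to almost nothing.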

In her example she first compresses Hamlet through conventional means, the idea being that finding the random bit string of the compressed Hamlet is easier than finding the bit string of the uncompressed one. But that compressed bit string already has quite high information entropy: you can't compress it much further.