JobLeonard
« Reply #320 on: September 19, 2018, 10:44:55 AM »
@party trick: the problem would be *finding* the Shakespeare stuff
gimymblert
« Reply #321 on: September 20, 2018, 07:16:04 AM »
Yes, that's the point: how hard would it be, what's the procedure, etc.
gimymblert
« Reply #322 on: September 20, 2018, 07:20:08 AM »
Okay, I went ahead and looked at what they said, since I forgot:

Party Tricks

> They say that an infinite number of monkeys at an infinite number of typewriters will produce all the works of Shakespeare. If we use a k-dimensionally equidistributed generator with a large enough k, we know that all the works of Shakespeare are guaranteed to be produced eventually; it will just take an unimaginable amount of time to find them.
> But that's assuming we search blindly.
> We can instead contrive the state of the generator so that it will find them, thus creating a party trick. We set up the generator so that it will find a Shakespeare play, and then jump backwards from that point, creating a generator that produces millions or billions of random-looking numbers, and then, suddenly, a Shakespeare play, and then back to random numbers.
> The extended generators provided by the C++ implementation of the PCG family include a set function that can be used to create party-trick generators, including ones that really do output Shakespeare.

http://www.pcg-random.org/useful-features.html#party-tricks

Procedure: random. Difficulty: random. Cost: random. Well, why would I expect less than that from a "random" generator?
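The "contrive the state" idea can be sketched with a toy generator. This is a hypothetical illustration, not the real pcg32_k64 API: the class and method names here are made up, but the trick is the same, since the generator reads its output from a large internal table, any window of upcoming output is a state you can set directly.

```python
import random

class ToyPartyTrick:
    """Toy generator whose output is read from a large internal table,
    loosely mimicking a k-dimensionally equidistributed PRNG: every
    64-word window of output corresponds to some reachable state."""
    def __init__(self, seed, k=64):
        rng = random.Random(seed)
        self.table = [rng.getrandbits(32) for _ in range(k)]
        self.i = 0

    def next32(self):
        out = self.table[self.i]
        # scramble the emitted slot so the stream keeps evolving
        self.table[self.i] = (out * 747796405 + 2891336453) % 2**32
        self.i = (self.i + 1) % len(self.table)
        return out

    def set(self, words):
        """Contrive the upcoming outputs -- the 'party trick'."""
        for j, w in enumerate(words):
            self.table[(self.i + j) % len(self.table)] = w

msg = b"To be, or not to be!"  # 20 bytes = five 32-bit words
words = [int.from_bytes(msg[i:i+4], "little") for i in range(0, len(msg), 4)]

g = ToyPartyTrick(seed=42)
for _ in range(1000):          # a thousand random-looking outputs...
    g.next32()
g.set(words)                   # ...then contrive the state...
out = b"".join(g.next32().to_bytes(4, "little") for _ in range(len(words)))
print(out.decode())            # ...and Shakespeare appears in the stream
```

The real pcg32_k64 does this without storing the message verbatim: its set operation computes the internal state that would have produced the desired output window.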
JobLeonard
« Reply #323 on: September 20, 2018, 08:58:12 AM »
The trick is that once Shakespeare is found, you can use the ability of the generator to jump ahead to instantly get to Shakespeare from the seed.
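That jump-ahead is cheap because an LCG step is an affine map that can be composed with itself by repeated squaring, so advancing n steps costs O(log n) work rather than n. A minimal sketch in Python, using the 64-bit multiplier and increment of the PCG family's LCG core (the function names are mine):

```python
M = 2**64
A = 6364136223846793005     # PCG family's 64-bit LCG multiplier
C = 1442695040888963407     # and default increment

def step(x):
    """One ordinary LCG step: x -> A*x + C (mod M)."""
    return (A * x + C) % M

def advance(x, n):
    """State after n steps, computed in O(log n) by repeated squaring."""
    a, c = A, C             # (a, c) represents applying step 2^k times
    while n:
        if n & 1:
            x = (a * x + c) % M
        c = ((a + 1) * c) % M   # compose the affine map with itself:
        a = (a * a) % M         # f(f(x)) = a*a*x + (a + 1)*c
        n >>= 1
    return x

x = 12345
y = x
for _ in range(1000):       # the slow way: 1000 individual steps
    y = step(y)
print(advance(x, 1000) == y)   # True, in about 10 squarings
```

This is why, once the offset of "Shakespeare" from the seed is known, getting there is instant.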
gimymblert
« Reply #324 on: September 20, 2018, 09:03:37 AM »
That's what I said? It's random!

I mean, I posted the procedure to document it and not leave others hanging, wondering whether it was true. Here is the code documentation:

> It may seem strange to find some English text in the output of a random number generator, but that's what pcg32_k64's 64-dimensional equidistribution guarantees you: that every possible sequence of 64 32-bit values (i.e., 256 bytes) will occur in its staggering 2^2112 period; in fact, they each occur 2^64 times.
> The only difficulty is finding such sequences in all the noise.
> The set operation means that you don't have to find such a state, you can create it.

http://www.pcg-random.org/using-pcg-cpp.html#state-type-pcg32-k64-pcg32-k64
« Last Edit: September 20, 2018, 09:15:22 AM by gimymblert »
gimymblert
« Reply #325 on: September 21, 2018, 05:55:21 AM »
JobLeonard
« Reply #326 on: September 21, 2018, 07:45:13 AM »
> That's what I said? It's random!

No, it's pseudo-random.

But I think I get you now, after reading again:
- the internal state of the generator is as big as Hamlet, so any trivial seed like all zeros is useless
- the amount of jumps you need to make is likely a larger number than what fits into that state as well

(Honestly though, if you're into this stuff you should read up on information theory, especially Shannon entropy and Kolmogorov complexity. There are mathematically hard limits to lossless compression.)
gimymblert
« Reply #327 on: September 21, 2018, 08:02:21 AM »
I know about these two bits: the data transmitted depends on the shared dictionary of the emitter and receiver. Basically it's about how "new" information can be transmitted.
In the Shannon model, an example often used is weather transmission with up to 4 weather states; it will fail if you want to pass a fifth state that the receiver has no model for. So what you have done is essentially pass a reference to the correct state, not the state itself. In order to pass more states than the receiver holds, you will need to pass a description, which is where the Kolmogorov stuff starts to apply more. But even that assumes you have the primitives to express the state to the receiver, and that's the same problem of the receiver and emitter needing shared references to communicate.
In the case of a reference, the idea of the lossless PCG noise is that the library "potentially" holds all human knowledge plus an inordinate amount of irrelevant noise, so all you need to know is the reference to the start of that knowledge, which is as small as the starting seed. It's basically an address into a virtual memory, aka "the library".
It's like recommending a book: I pass you the reference to the book and you decompress the data by accessing the book. In fact that's the idea behind deterministic procgen: you have the fixed description and the permutation that defines a procgen entity's identity.
By the way, the PCG code can only encode up to 256 octets in a sequence by abusing that "reference guessing" trick.
Edit: BTW, 256 octets / 4 octets (32-bit seed) = 64:1 fixed compression ratio, assuming it actually works. There is still a limit, since we can't create a big enough state for arbitrary sizes. It would compress your 4k demo to 64 octets, but that's without counting the size of the PCG random implementation itself.
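The weather example above can be made concrete in a few lines (a hypothetical sketch; the codebook and names are made up for illustration):

```python
# Shared-dictionary signalling, as in Shannon's weather example: with an
# agreed-on codebook of 4 states, a 2-bit reference per message suffices.
# A fifth state the receiver has no model for simply cannot be referenced.
codebook = {"sunny": 0b00, "rain": 0b01, "snow": 0b10, "fog": 0b11}
decode = {v: k for k, v in codebook.items()}

def send(state):
    # transmit a 2-bit reference to the shared model, not the state itself
    return codebook[state]

print(decode[send("snow")])   # receiver reconstructs "snow" from its model
try:
    send("hail")              # not in the shared dictionary
except KeyError:
    print("cannot transmit 'hail': receiver has no model for it")
```

The "library" framing is the same mechanism scaled up: the seed is a reference, and the generator implementation is the enormous shared dictionary both sides must already have.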
« Last Edit: September 21, 2018, 08:41:07 AM by gimymblert »
JobLeonard
« Reply #328 on: September 21, 2018, 03:43:42 PM »
> In algorithmic information theory (a subfield of computer science and mathematics), the Kolmogorov complexity of an object, such as a piece of text, is the length of the shortest computer program (in a predetermined programming language) that produces the object as output. It is a measure of the computational resources needed to specify the object, and is also known as descriptive complexity, Kolmogorov–Chaitin complexity, algorithmic complexity, algorithmic entropy, or program-size complexity.

https://en.wikipedia.org/wiki/Kolmogorov_complexity

e.g. a string of infinite 0s can be described very concisely. A string of infinite truly random bits can, by definition, not be generated from fewer bits.

> In the case of a reference, the idea of the lossless PCG noise is that the library "potentially" holds all human knowledge plus an inordinate amount of irrelevant noise, so all you need to know is the reference to the start of that knowledge, which is as small as the starting seed. It's basically an address into a virtual memory, aka "the library".

The point is: how small that seed can actually be depends on the entropy of the information that you are trying to retrieve. Procedural generation works because its output actually has very low entropy in information-theory terms. In her example she first compresses Hamlet through conventional means, because the idea is that finding the random string of bits of the compressed Hamlet is easier than finding the random string of bits of the unpacked one. But that compressed bit string already has quite a high information entropy, actually. You can't compress it much further.
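The "you can't compress it much further" point is measurable: a crude per-byte Shannon entropy estimate is low for repetitive text and climbs sharply once the text has been compressed. A rough sketch (empirical entropy on short samples understates the true value, so treat the numbers as indicative):

```python
import math
import zlib
from collections import Counter

def entropy_bits_per_byte(data):
    """Empirical Shannon entropy of a byte string, in bits per byte."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

text = b"To be, or not to be, that is the question.\n" * 200
packed = zlib.compress(text, 9)

print(len(packed) < len(text))   # True: repetitive text compresses well
# the compressed bytes look far more uniform, i.e. higher entropy per byte
print(entropy_bits_per_byte(text) < entropy_bits_per_byte(packed))  # True
```

A seed-as-reference scheme faces the same wall: once the target bit string is near-maximal entropy, no shorter description exists to point at.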
gimymblert
« Reply #329 on: September 21, 2018, 07:28:53 PM »
That's kinda the point? I'm not sure what you are getting at. I was just saying that any string of characters can be found using a single seed. In fact you "only" have 4 billion seeds, which limits the number of 256-octet sequences you can reach with 32 bits, and I don't know if there are collisions (i.e. different seeds producing the same sequence). It's not magical, and I was saying it still abides by those rules you mention: the seed is the minimal-length description. Even though some seeds could be compressed further, that would come at the expense of writing extra code, which is an increase in data size. But it's also convenient, because then you just have a reference and a single algorithm, so it generalizes a lot. This won't get the entire Wikipedia down to a single bit; that's not the point. What precisely are you objecting to? I feel like we are saying the same thing in circles.

More procgen stuff anyway: https://ukwrite.wordpress.com/2014/10/22/write-your-own-spooky-story-with-random-plot-generators/
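The "only 4 billion seeds" caveat is the whole story, by a pigeonhole count:

```python
# A 32-bit seed can index at most 2**32 distinct 256-byte output windows,
# but there are 256**256 = 2**2048 possible 256-byte strings, so almost
# none of them are reachable from a 32-bit seed.
seeds = 2**32
strings = 256**256                  # == 2**2048

deficit_bits = strings.bit_length() - seeds.bit_length()
print(deficit_bits)                 # 2016: the seed space is 2**2016 times too small
print(strings // seeds == 2**2016)  # True
```

This is why the real party trick needs pcg32_k64's full 2112-bit state, set directly, rather than a 32-bit seed: the seed alone cannot address an arbitrary 256-byte target.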
JobLeonard
« Reply #330 on: September 23, 2018, 11:54:34 AM »
What I'm getting at is this:
> To see if I can actually encode/compress stuff into a seed, how much and at what cost.
You honestly won't find compression much better than the existing methods. Procgen uses handcrafted patterns, even when applying noise.
JobLeonard
« Reply #331 on: September 24, 2018, 07:01:44 AM »
IK Rig-like Prototype Demo, published on 20 Sep 2018:

> Showing off a few weeks' worth of work trying to implement something similar to IK Rigs from Ubisoft's GDC demo two years back. So far, the concept works by having a set of universal data that can be animated, then pushed through IK solvers, then applying the results back to the armature's bones.

https://github.com/sketchpunk/FunWithWebGL2

Good WebGL2 teaching channel in general!
gimymblert
« Reply #332 on: November 13, 2018, 12:06:09 PM »
CitizenCon 2948 - Panel: Biome on the Range
Promethean AI Announcement Trailer
Promethean AI is everything I predicted.
gimymblert
« Reply #334 on: November 14, 2018, 05:13:16 PM »
Good stuff
Schrompf
« Reply #335 on: November 14, 2018, 11:12:19 PM »
I liked the CitizenCon video. Nothing spectacularly new, but a nice breakdown of generative techniques, with nice images to illustrate it.
Snake World, multiplayer worm eats stuff and grows DevLog
gimymblert
« Reply #336 on: November 17, 2018, 08:27:59 PM »
« Last Edit: November 17, 2018, 11:37:17 PM by gimymblert »
gimymblert
« Reply #338 on: December 01, 2018, 08:37:35 PM »
gimymblert
« Reply #339 on: December 18, 2018, 07:04:57 AM »