TIGSource Forums › Developer › Technical (Moderator: ThemsAllTook) › Procedural resource dump
Author Topic: Procedural resource dump  (Read 142721 times)
gimymblert
The archivest master, leader of all documents
« Reply #560 on: April 07, 2023, 01:45:34 AM »

Okay, we have crossed the tipping point:




 This Tech Enables AGI | How to Create Your Own Autonomous GPT-4 Agents with Auto-GPT
gimymblert
« Reply #561 on: April 08, 2023, 06:29:07 AM »

Why code when the computer can do it for you?




 Cataclysm - The Last Python Module?
Quote
Overview of `cataclysm` for Python - extreme approach to AI code generation in the modern era of generative AI. Leverages code inspection to do just-in-time implementation of the best function to fulfill your needs.
gimymblert
« Reply #562 on: April 15, 2023, 01:53:55 AM »

https://hackaday.com/2023/04/10/the-hello-world-of-gpt/
THE HELLO WORLD OF GPT?
JobLeonard
« Reply #563 on: April 17, 2023, 10:18:01 AM »

Gimy, could you please start reading the works of Emily M. Bender and Timnit Gebru and put your feet back on the ground. You're doing the "overinfatuated with the latest tech hype" thing again.
Schrompf (C++ professional, game dev sparetime)
« Reply #564 on: April 18, 2023, 10:49:48 AM »

Didn't dare to write it, but yes, some applicable real-world procgen in between the AI craze would be nice.
gimymblert
« Reply #565 on: April 19, 2023, 08:37:47 AM »

Quote
Gimy, could you please start reading the works of Emily M. Bender and Timnit Gebru and put your feet back on the ground. You're doing the "overinfatuated with the latest tech hype" thing again.
I don't know if there is any argument in this, or just projection. The tech is useful right now, it's fully integrated into my work, and it has boosted my productivity, like right now. If I have a use for a tech, is that being over-infatuated? Or are you skeptical because it's new, and you're annoyed by new things instead of trying to understand them?

I was on the bandwagon way before GPT was a thing; it's the reason I started this thread, so it's not about "new tech". It's stuff I have been thinking about and researching for a long time, from the Stanford parser, through Emily Short's work on narrative design in IF, to chatbots like Suzette and Rosette built on ChatScript and the older AIML stuff.

Heck, even in generative art I was following the works of AARON by Harold Cohen and EMILY HOWELL by David Cope. That's super old news.

So when you say "latest hype", I'm like: "so you out yourself as a newb?" Noir

Like, I literally started this batch by warning about the actual abilities of the tech; people try to imagine it's some oracle AI or a database query, and that's problematic.

I shared articles along two main considerations:
- Stuff you can do and verify on your own, LOCALLY, without relying on big tech gatekeeping
- Stuff that gives you clues and informs you about the internal workings of the tech.

For example:
- People keep saying it's "predicting the next word". That's slightly inaccurate, and the "slightly" is quite important. I focused on sharing information about embeddings because of that "slightly": embeddings are black boxes that summarize the semantic classes of a token (the representation of a word). The system predicts the next embedding, that is, the whole set of semantic classes, which is a lot more data than just the next word; it means there is a "velocity of semantics" that is maintained through the generation. The attention mechanism is similar to the Stanford parser (or parsers in general), in which we associate a token with others, EXCEPT that the "slightly" adds a twist: through multi-headed attention, the model attends to whole sets of semantic classes and filters them through the multiple heads. Then it recursively applies that system to the result, storing stuff in the FF layer just after the multi-head. Once you understand that, you get a more precise idea of why the model works: it cannot store data verbatim like a typical chatbot or an SQL query, because the system relies on generalizing patterns, so any memorization is overfitting, and hallucinations are over-generalization. The overall output looks like what a regular chatbot does, but we must refrain from that comparison. The system's usefulness is in semantic manipulation, not in the database consulting it's being used for right now.
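For what it's worth, the core of that attention mechanism fits in a few lines. Here's a toy single-head scaled dot-product attention in plain NumPy (my own illustrative sketch, random untrained numbers, not code from any of the linked videos): each token's output is a mix of every token's value vector, weighted by query/key similarity, so the model really is moving whole embedding vectors around, not single words.

```python
import numpy as np

def attention(q, k, v):
    """One head of scaled dot-product attention: every token's output is a
    weighted mix of all value vectors, weighted by query/key similarity."""
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                 # token-to-token affinities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: rows sum to 1
    return weights @ v, weights

# 3 tokens, each a 4-dimensional embedding (random toy numbers, untrained)
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out, w = attention(x, x, x)  # self-attention: q, k, v from the same tokens

# each output row is still a full vector in embedding space: the model
# manipulates whole "sets of semantic classes", not single words
print(out.shape)  # (3, 4)
```

A real transformer runs several of these heads in parallel and stacks the result through feed-forward layers, but the mixing step above is the whole trick.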

- Same for that frakking metaphor about image generation. People repeat "it learns to remove noise", but they don't actually know what that means or why the noise is added in the first place. Noise is a way to jitter the learning of semantic classes to combat the sparseness of high-dimensional space; it lets the system learn a gradient toward the classes, which in turn lets it navigate the latent space toward desirable states.
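That jitter point can be made concrete with a toy denoiser (an illustrative sketch of mine, not real diffusion code): fit a model to map noisy samples back to clean ones, and the learned map pulls arbitrary points toward the data, which is exactly the "gradient toward the class" you then navigate during generation.

```python
import numpy as np

rng = np.random.default_rng(1)

# A tight "semantic class": clean samples clustered around 5.0
clean = rng.normal(loc=5.0, scale=0.1, size=(10_000, 1))
noisy = clean + rng.normal(scale=1.0, size=clean.shape)  # jittered copies

# Fit a linear least-squares denoiser: predict the clean value from the noisy one
design = np.hstack([noisy, np.ones_like(noisy)])
coef, *_ = np.linalg.lstsq(design, clean, rcond=None)

def denoise(x):
    return float(x * coef[0, 0] + coef[1, 0])

# A point far from the class gets pulled toward it: the learned map is a
# step along a gradient toward the data, which is what makes the space
# navigable toward "desirable states"
assert abs(denoise(0.0) - 5.0) < abs(0.0 - 5.0)
```

Real diffusion models do the same thing with a deep network, many noise levels, and many classes at once, but the reason for adding the noise is this: without it there is nothing between the data points to learn a direction from.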

You won't find these explanations online.

The thing is, I discovered neural networks on my own, as a kid, when I was trying to make smarter NPCs by enhancing affinity systems; later, on AI forums, people told me those were neural networks.

I want to make a course that demystifies neural networks away from the statistical lingo. For example, latent space becomes a lot more understandable if you have a toy model that maps onto known concepts. Take a color-to-text, or text-to-color, model: RGB is basically an embedding space we all understand. We can represent words as positions in the RGB semantic space, which is only 3 well-understood dimensions (6 semi-obfuscated ones if we use hex), and mapping hex codes to RGB to other spaces like HSL is similar to what's going on with neural nets.
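Here is roughly what that toy model looks like (my own made-up mini vocabulary, purely illustrative): words embedded as RGB points, decoding by nearest neighbour, and hex as just another encoding of the same space.

```python
import numpy as np

# A toy "embedding table": colour words as points in RGB space (0..1)
embed = {
    "red":    np.array([1.0, 0.0, 0.0]),
    "green":  np.array([0.0, 1.0, 0.0]),
    "blue":   np.array([0.0, 0.0, 1.0]),
    "yellow": np.array([1.0, 1.0, 0.0]),
    "orange": np.array([1.0, 0.5, 0.0]),
}

def nearest_word(vec):
    """Decode a vector back to a word: nearest neighbour in the space."""
    return min(embed, key=lambda w: np.linalg.norm(embed[w] - vec))

# Arithmetic in the embedding space is meaningful: red + green lands on yellow
mix = embed["red"] + embed["green"]
print(nearest_word(mix))  # yellow

# Hex codes are just another encoding of the same 3-D space
def hex_to_rgb(h):
    return np.array([int(h[i:i + 2], 16) / 255 for i in (0, 2, 4)])

print(nearest_word(hex_to_rgb("ff8000")))  # orange
```

Swap "3 colour dimensions" for "a few thousand opaque dimensions" and you have a word embedding: positions in a space where distance and direction carry meaning.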

I'm not enamoured with a new tech; it's literally what I have been pursuing since I was a kid.


Also, do you realize: everything I share, I read through it, read alternatives, and selected it to put here for one reason or another. Often I post multiple versions of the same thing, because people can pick the one they understand best, and having the same thing told multiple ways helps understanding.


//---


Quote
Emily M. Bender and Timnit Gebru
So I looked into that, and there could not be a more tone-deaf post relative to anything I shared.

Why would you assume things I'm not saying and just further your cognitive bias?

I'm literally mad; that's borderline illiteracy. Just because the ambient discourse is going places, like anthropomorphism, doesn't mean the opposite reduction is smarter.

For example: I remarked, along with many others, that the tech is capable of "theory of mind". It's a leap to say that makes it like a human; having the property doesn't imply anything of the sort. It's like saying planes can't fly because they don't flap their wings. AS A DESIGNER, "theory of mind" implies compression of data and context-sensitive understanding; that's cool in itself, and it probably doesn't map onto how theory of mind works in human behavior. It doesn't have to flap to fly.

And yes, the tech has inherent dangers, just like any other tech since the invention of fire (and, with it, the house fire), dangers that depend on who creates and uses the tech, and therefore on who is left deciding what the identity of the tech is. Also, equivalence doesn't mean similarity; conflating the two is a toddler-level understanding.

I mean, that's as shallow as condemning gas stoves on the basis that they're a bomb in everyone's house, when there is a more nuanced discussion to be had about the other threats they pose versus the advantages they afford, compared with other ways of "boiling water to make food".

You would think that, as a black person, I'm aware of the cycle of bias and threat in technology, and that this would be an incentive for me to actually study it, so as to not be taken aback when it's misused, by having BASIC literacy about how it functions? Run-on sentence for effect; I'm exasperated by the discourse around this tech. If it's useful, use it. It is, so I use it.

//----------------

I mean, it's like when relational databases were created and people said "we will reach human-level intelligence with that", then got disappointed and called the tech crap. Then it became known as SQL and lifted whole industries. Even though it didn't deliver the silly lofty goal of a "gay communist utopia" star-level society, it was a useful tech worth studying, and guess what: you still have to learn it when you go into computing; it's used everywhere, all the more so in the internet era.

Basically, there are 3 metrics for knowing whether a tech will even moderately change society, at some level:
- Does it modify our relation to labor (the 3 types: mechanical, cognitive, social)?
- Does it modify how we handle logistics (organization, planning, and distribution of things)?
- Does it modify how we communicate?

LLMs, even at their current level, can upset how we are organized along these metrics. Even if they aren't like a human brain or human-capable (for example, they can't form complex plans that engage in discontinuous thought), an army of cheap juniors moderated by a single manager can do a lot along these 3 axes. It's not like humans don't make errors, which is why we use "mixture of experts" structures, aka scientific consensus.

I remember a time on TIGS, and on the internet in general, when we had substantiated debates that didn't involve "no u" and quoting influencer randos on Twitter because they cajole our fears. Like, what is the argument? Is it an opinion, or is it based on observation of facts and counter-facts? Where is the literacy, and the historical reflection?

I'm just mad, as you can tell by the number of successive posts. That argument was so dumb, and so not on the level I'm used to. I'm out of the playground; I'm not a toddler.

/rant

//----------------



BTW, if you want to roast me, here is a bunch of pursuits, more or less active, that I have:

- Fully automated indoor farming, recycling human waste (to harvest NPK nutrients), to replace the fridge in an apartment.

- TRASH (Tiny Robot, Analog Spring Humanoid), based on removing digital computation to reproduce the ATRIAS papers, taking inspiration from SLIP (Spring-Loaded Inverted Pendulum).

- YAGNNNI (You Aren't Going to Need Neural Networks for Intelligence): rolling back to word embeddings to better build clusters of interpretable, statistically derived semantics from a corpus, avoiding the ELMo jump into machine-learned black-box embeddings. Currently trying to figure out how to derive the polysemy and homonymy of tokens, and probably experimenting with current embeddings to derive interpretable subspaces: do the embeddings themselves support polysemy, or do the LLM's FF layers decode it back from context? What would we gain if we decoded polysemy and homonymy before the LLM learning step? Also, given that the layers of GPT-style LLMs are sparse, is there value in recurrence of weights? And what about XOR at the neuron level? What other metaphors could make the vector space at the token level intuitive?

- NUCLEATION theory of the process of intelligence through simulation, in which I explain how AI diverges from humans due to how the nucleation happens in the emergence of the ability, and what that tells us about humans. That's where I draw the line, through ACTUAL arguments: if we admit (you probably don't) that there is an equivalence with PART of the brain (equivalence, NOT similarity; it's a thought exercise), then AI can't think like a human, because of the nucleation process.

Basically, we have to assume that a part of the brain (probably the neocortex) is a structure that does "equivalent" processing, i.e. predicts the next "representation". In humans it is fed by internal stimuli (emotion, goal-seeking, body sensation, etc.) and external stimuli (sense data), which gives it a singular coherence (a unique perspective), and these representations are in a feedback loop.

Meanwhile, LLMs are fed representations (the training) of representations (the text) created by those stimuli (text written by humans to express their experiences). That's a second-order, maybe a third-order, representation of the human mind. Since these come from multiple sources of experience, the nucleation is not singular: it represents multiple states, at different times, from multiple sources. It's not experiential; it has no sense of feedback, no volition, no emotional reaction, and no feedback stimuli. So while it looks like it acts human, it's a simulacrum (impressive in itself) emergent from the process. The big property of this unstructured collection of experiences is the ability to instantiate agents in a learned state, behaving in a certain way through (partial and inaccurate) simulation. That very useful property is the main revolution, not whether it is or isn't like the human mind.

That's also the main danger (those states are unstable and unordered; phase shifts can occur, as we have already observed when behavior drifts into a manic state), and it's something nobody has the literacy to discuss, as far as I know. The discussion about the potential agency of AI is a close but crude approximation (see the mesa-optimizer problem, as explained by Robert Miles).
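To make the earlier YAGNNNI point concrete: one crude way to derive polysemy from embeddings is to cluster the context vectors a token appears in, so that two senses of the same token show up as two clusters. This is a toy sketch of my own (made-up 2-D data and a minimal 2-means), not an implementation from the thread.

```python
import numpy as np

rng = np.random.default_rng(2)

# Pretend context vectors for one token, say "bank": 20 river-side contexts
# and 20 finance contexts, forming two clusters in a toy 2-D space.
river = rng.normal(loc=[0.0, 0.0], scale=0.1, size=(20, 2))
money = rng.normal(loc=[1.0, 1.0], scale=0.1, size=(20, 2))
contexts = np.vstack([river, money])

def two_means(points, iters=20):
    """Crude 2-means: init with the first point and the point farthest from it."""
    c0 = points[0]
    c1 = points[np.argmax(((points - c0) ** 2).sum(axis=1))]
    centers = np.stack([c0, c1])
    for _ in range(iters):
        dists = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        centers = np.stack([points[labels == j].mean(axis=0) for j in range(2)])
    return labels

labels = two_means(contexts)
# the two senses of the token land in two different clusters
assert labels[0] != labels[20]
```

With real contextual embeddings the space has hundreds of dimensions and the number of senses is unknown, but the principle is the same: senses are clusters of contexts.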

//---

You know, stuff nobody cares about, because anthropomorphism goes BRRRRRRR. The people who say the tech is not human are the first to anthropomorphize, because they project human qualities they then deny. It's the "it doesn't flap, so it can't fly" thing again; that's no different from the basic AGI lover.
« Last Edit: April 23, 2023, 06:49:28 AM by gimymblert »

gimymblert
« Reply #566 on: April 19, 2023, 10:51:51 AM »


https://echoesofsomewhere.com/
Echoes of Somewhere
An experimental freeware 2.5D point and click adventure game with AI assisted graphics.
An anthology series with different worlds and characters in each episode.
gimymblert
« Reply #567 on: April 22, 2023, 06:10:56 PM »




 Advanced Road Generation | Erwin Heyms | Games Workshop




 Complex Roads in Houdini - Part 1: Solving Intersections
« Last Edit: April 22, 2023, 07:09:46 PM by gimymblert »

gimymblert
« Reply #568 on: April 23, 2023, 06:44:38 AM »




 Generative Graphics Workflow for Games with Jussi Kemppainen
gimymblert
View Profile
« Reply #569 on: April 25, 2023, 12:33:26 PM »




 Why Halo Infinite's Bots Play More Like Humans | AI and Games #71
gimymblert
« Reply #570 on: April 27, 2023, 11:18:53 AM »




 Houston, we have a planet: The spherical terrain of Kerbal Space Program 2 | Unity at GDC 2023




 Creating realistic landscapes in Unreal Engine with Houdini | GDC 2023
gimymblert
« Reply #571 on: April 27, 2023, 11:53:37 AM »




 AI Actors for Game Worlds
JobLeonard
« Reply #572 on: April 27, 2023, 12:34:41 PM »

Yeah, I am roasting you. Not for your knowledge about AI, but for the fact that I see you dump a million AI tech links but zero, I mean zero, discussion about what is happening out there right now in terms of AI hype, centralization, big VC money, and the way what little we have left of an internet commons is being actively destroyed by tech bros who wish to commodify and monetize everything.

And given that you're the guy who used to share tons of social justice articles I am sincerely disappointed that you of all people are completely silent on that front.

I'm holding you to a higher standard, and you have yourself to blame for raising that bar; calling me a "n00b" won't help there, because if you're right about that, you only strengthen that argument.
gimymblert
« Reply #573 on: April 28, 2023, 06:19:07 AM »

It doesn't matter, because this thread is for catching up, so YOU can understand and use the tech for yourself. Everything else belongs in other threads. It's the PCG RESOURCES thread, not "what's happening in AI".

Also, you would think that getting the knowledge and seizing the means of production would be something you'd appreciate along the SJW angle.

BUT NO, you chose to be shallowly butthurt and didn't elevate the discussion in any way, because you literally don't know what's going on. Most of the models and links shared are EXPLICITLY about not leaving the tech in the hands of finance-happy techbros and about moving away from gatekeeping. If you understand the tech, you essentially understand that it's nothing complicated and that it's essentially brute force:

- The Pile is an open-source training set, created by one guy in his bedroom, to offer free-for-all resources for creating similar-quality AI.

- BLOOM (a GPT-style model) and various other big models are open source and usable by anyone; people set up an open-source distributed-computing project called PETALS that lets anyone run these big models without relying on paid servers that gatekeep the tech and pay capitalists.

- LLaMA is corporate-made by Meta and trained using the Pile, BUT it was leaked to the public, and the paper massively hints at how feasible it is to run a competent LLM on ANYONE's hardware.

- Alpaca showed that we can train and extend the capabilities of an LLM on ANYONE's machine.

- Using the leaked LLaMA, many people set out to optimize inference, showing it can run even on weak machines like a Raspberry Pi (albeit at slow speed), finally breaking the limits of server-scale memory and expensive GPUs like the A100.

- LoRA techniques show you can train and augment an LLM without spending a lot on compute: a (very decent) desktop computer can train in a few hours instead of weeks, making the tech more accessible.

- Many models and training sets are being created, such as Open Assistant, to combat the gatekeeping. Many models were explicitly released under the Apache 2.0 license to permit anyone to build things with them.
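Since LoRA comes up above: the trick is just adding a trainable low-rank update on top of a frozen weight matrix, so you only ever train two small matrices. A minimal NumPy sketch (my own illustration of the idea, not any particular library's API):

```python
import numpy as np

d, r = 1024, 8  # hidden size vs. LoRA rank (r much smaller than d)
rng = np.random.default_rng(0)

W = rng.normal(size=(d, d))         # pretrained weight, kept frozen
A = rng.normal(size=(r, d)) * 0.01  # trainable down-projection
B = np.zeros((d, r))                # trainable up-projection, zero-initialized

def forward(x):
    # Effective weight is W + B @ A, but it is never materialized:
    # the update is applied as two cheap, skinny matrix multiplies.
    return x @ W.T + (x @ A.T) @ B.T

# Only A and B are trained: a tiny fraction of the full parameter count.
full_params = d * d
lora_params = d * r * 2
print(lora_params / full_params)  # 0.015625, about 1.6%

# Zero-init of B means the adapted model starts exactly equal to the base.
x = rng.normal(size=(1, d))
assert np.allclose(forward(x), x @ W.T)
```

That parameter ratio is why fine-tuning that used to need a GPU cluster fits on a desktop: the optimizer state and gradients only exist for A and B.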

What does that mean? Well, the hype, the centralization and the big VC money: that's a stoopid game for stoopid people. The tech is useful and good enough, and already in the hands of THE PEOPLE. My actions here just further that initiative. Knowledge helps you know what's up and see through the stoopid; sharing the tech helps you not depend on gatekeeping; and the VC thing becomes irrelevant, because they are burning money on liars to gain exclusivity over non-exclusive stuff... Once it's demonstrated that the tech is largely accessible, how it works, and what its limits are, the circus becomes largely irrelevant.

So it does mean you have low standards. I was way ahead of you on raising the bar. You are a noob, sorry, but that's true: you got subjugated by the play of stoopid financial power and lost the plot. Obviously, you have been childish and jumped the gun, didn't read anything I posted, and just reacted senselessly, so in order to save face, I'll be more childish than you and proceed to do infinite dabs on your face. Facepalm Hand Shake Right


EDIT:
Let me test how subjugated you are:
- Tell me why GPT-4's demonstration of multimodal abilities (it can see images) is a lie.


NOTE:

Subjugation:
When people are so used to a system that they can only think inside of it, and don't realize you can assess things without relying on the existence and framing of that system, things only exist within the system or in relation to it. That makes them unable to think about different systems and modes of existence. While they criticize the system, they are unable to think about how things could help them exist outside it, since the system defines their existence. So when something new happens, they only think about how it extends the system, not how it extends themselves and builds alternatives to the system.

Case in point, the AI art debate:
- it relies on the existence of entertainment corporations to create "jobs",
- artists need "jobs" to pay rent and live,
- AI can replace them at those "jobs",
- therefore rendering them obsolete and unable to live.

That's pure syllogism.

- Entertainment corporations exist because they aggregate labor and audience,
- the owners of the corporation benefit from the aggregation by taking a small margin on a giant operating cost,
- their interest is to convince you that they are important,
- but really they are middlemen between labor and audience, coasting on the aggregation.

So what's the issue?

- Art used to be expensive, and a single person couldn't do a lot.
- Corporations were born as a way to make expensive stuff possible through aggregation.
- AI art makes big stuff less expensive, with fewer people,
- which means you don't need the aggregation of a large team; the cost goes down,
- which means smaller teams, even single individuals, are competitive against corporations,
- which means you can reach the audience without the corporation,
- which means your job exists without the corporation, since they are mere middlemen.

It's not new; the situation was similar at the birth of indie dev:

- Making games was expensive and risky, so you needed a publisher.
- The internet made distributing games easier and cheaper, less reliant on publisher money.
- The internet made access to free engines easier.
- The internet lowered the accepted threshold of production value (you don't need realistic graphics; pixel art is good enough).
- The internet made access to knowledge faster, easier, and more approachable.
- The internet allowed new models to emerge.
- Now one person can make a game in their bedroom.

Now, it didn't make corporations go away; it allowed smaller ones to appear where they couldn't before (indie publishers). The market is saturated, but it's not "locked" the way it used to be. Indie games gained acceptance alongside big productions, and big productions are increasingly suffering (there are fewer and fewer of them, and they take longer and longer). Indie as a fallback means creatives aren't captive to the corporate world, and it has been steadily on the rise.

Obviously, if we just sit back and cry because we are subjugated, we're going to leave them the keys to define the system. But their power is mostly to convince us there is no alternative, and our power is to build alternatives, and that takes experimentation and knowledge.
« Last Edit: April 28, 2023, 06:55:53 AM by gimymblert »

gimymblert
« Reply #574 on: May 05, 2023, 07:29:58 AM »




 Procedural Generation with Wave Function Collapse and Model Synthesis | Unity Devlog
gimymblert
« Reply #575 on: May 07, 2023, 11:11:15 AM »




 NEW MPT-7B-StoryWriter CRUSHES GPT-4! INSANE 65K+ Tokens Limit!

commercial license and free model

gimymblert
« Reply #576 on: June 03, 2023, 07:47:43 AM »




 How Unexplored 2 Generates Entire Fantasy Worlds from Scratch | Artifacts #1




 Reinventing procedural generation - Errant Worlds plugins - Daniel Krakowiak || Errant Photon ||

gimymblert
« Reply #577 on: June 03, 2023, 07:48:08 AM »




 Terrain landscaping using city shapes - Sebastian Werema || #techland ||




 Behavior Places breathing life into world of #DyingLight2 || Jacek Szymonek || Techland
gimymblert
« Reply #578 on: August 14, 2023, 09:13:44 PM »




 NVIDIA’s New AI Is Gaming With Style!
gimymblert
« Reply #579 on: August 14, 2023, 09:14:11 PM »




 GPT-4 Outperforms RL by Studying and Reasoning...