flowerthief
« Reply #460 on: July 08, 2022, 10:09:13 AM »
Have you or anyone read that and could review/summarize the main points?
gimymblert
« Reply #462 on: July 11, 2022, 01:20:52 PM »
The Matrix Awakens: Creating the Vehicles and VFX | Tech Talk | State of Unreal 2022
The Matrix Awakens: Blurring Interactive and Cinematic Experiences | Tech Talk | State of Unreal 2022
https://www.youtube.com/watch?v=h_dJtk3BCyg
The Matrix Awakens: Creating the Characters | Tech Talk | State of Unreal 2022
gimymblert
« Reply #463 on: July 11, 2022, 09:57:34 PM »
Puppeteering: Recording Animations in UE5 | Feature Highlight | State of Unreal 2022
Giving Personality to Procedural Animations using Math
gimymblert
« Reply #464 on: July 15, 2022, 11:52:17 PM »
Procedural Grass in 'Ghost of Tsushima'
gimymblert
« Reply #465 on: July 19, 2022, 08:52:38 PM »
Modbox OpenAI-GPT3 / Replica AI Scripting
gimymblert
« Reply #466 on: July 20, 2022, 06:40:46 PM »
gimymblert
« Reply #468 on: August 29, 2022, 07:23:06 PM »
Procedural Generation of Everyday Sounds
Layer-Based Procedural Generation for Infinite Worlds
gimymblert
« Reply #469 on: August 29, 2022, 07:23:35 PM »
The Trick I Used to Make Combat Fun! | Devlog
Daggerfall and Procedural Generation - Finding Beauty in the Mundane
« Last Edit: August 29, 2022, 07:41:23 PM by gimymblert »
gimymblert
« Reply #472 on: September 25, 2022, 02:24:00 AM »
How I made lipsyncing software in Godot
gimymblert
« Reply #473 on: January 30, 2023, 08:39:55 AM »
DeepMind’s ChatGPT-Like AI Writes Amazing Screenplays!
gimymblert
« Reply #474 on: February 01, 2023, 01:47:33 AM »
michaelplzno
« Reply #475 on: February 06, 2023, 11:36:17 AM »
That grass tutorial was great; I found it too.
Golds
« Reply #476 on: February 06, 2023, 02:29:25 PM »
Generating Procedural Video Game Dialog With the OpenAI API and Real-Time Text-to-Speech

NPC dialog lines are generated on the fly, different every time, facilitated by a custom prompt for each line in our custom Unity dialog editor. Instead of writing the dialog directly, you tell the AI what kind of possibility space to write in, and give it some background. It’s sort of like prepping a kid for an improvisational play. Speaking lines are sent to Google’s Cloud Text-to-Speech and processed live.

Read more details about this technique and its implications in the full post here: https://doomlaser.com/openai-api-generated-video-game-dialog-with-real-time-text-to-speech/
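For anyone who wants to play with the same pipeline, here is a minimal sketch of how the pieces could fit together. This is an assumption, not the actual doomlaser implementation: the model name, prompt wording, and voice settings are placeholders. A background blurb plus a per-line instruction goes to the OpenAI completion API, and the returned line is handed to Google Cloud Text-to-Speech.

```python
# Minimal sketch: prompt -> OpenAI completion -> Google Cloud Text-to-Speech.
# Assumes the `openai` (pre-1.0 interface) and `google-cloud-texttospeech`
# packages are installed and credentials are configured; the model, prompt,
# and voice below are illustrative placeholders.
import openai
from google.cloud import texttospeech

openai.api_key = "YOUR_KEY"  # placeholder

def generate_npc_line(background, instruction):
    # Describe the possibility space instead of writing the line directly.
    prompt = (f"{background}\n\n"
              f"Write one short line of NPC dialog: {instruction}\nNPC:")
    resp = openai.Completion.create(
        model="text-davinci-003",  # placeholder model name
        prompt=prompt,
        max_tokens=60,
        temperature=0.9,           # high temperature -> different every time
    )
    return resp["choices"][0]["text"].strip()

def speak(line, out_path="npc_line.mp3"):
    client = texttospeech.TextToSpeechClient()
    audio = client.synthesize_speech(
        input=texttospeech.SynthesisInput(text=line),
        voice=texttospeech.VoiceSelectionParams(language_code="en-US"),
        audio_config=texttospeech.AudioConfig(
            audio_encoding=texttospeech.AudioEncoding.MP3),
    )
    with open(out_path, "wb") as f:
        f.write(audio.audio_content)

line = generate_npc_line(
    background="You are a grumpy blacksmith in a small medieval town.",
    instruction="greet a customer who keeps breaking their sword",
)
speak(line)
```

From Unity the same thing would just be two HTTP calls from C#, but the flow is identical: prompt in, line out, audio out.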
michaelplzno
« Reply #477 on: February 08, 2023, 08:41:15 AM »
Very cool. I think the most interesting bits will be observations about the game state. Even then, you'd want some kind of data telling the AI that the user has done something weird, so the AI can respond like a real brain. Interesting write-up.
gimymblert
« Reply #478 on: February 11, 2023, 09:23:53 AM »
I haven't shared anything about LLMs aside from story generation (offline generation, which can be replicated using the hosted services and manual prompts), because most uses would mean an online deployment, and we can't replicate that offline, locally, on an ordinary consumer computer. Online LLMs can still be good for generating a massive amount of decent content at dev time, though, like baking all the dialogue, all the quests, stories, or any other feature these models can handle.
I think it can be done with smaller LLMs that run locally, assuming better "prompt whispering" that goes back and forth with a custom-made inference engine. The idea is that small LLMs are good enough to generate plausible sentences, and the difficulty with dialogue generation was never the generation itself but the quality of the prose, which is usually stunted by heavy use of mad-lib templates and the lack of contextual grammar. A small LLM could bridge the gap between "robotic" hand-made generation that sets the goals and human-plausible sentences that adapt to the flow of the conversation. Game dialogue tends to be short answers, so the small context window of a small LLM wouldn't be an issue, as long as we don't expect the breadth of ChatGPT but a focused, context-aware exchange. That's a healthy limitation rather than flexing with distracting, out-of-scope discussion.
I think it's safe to assume we can generate ad hoc prompt commands using mad libs that instruct the model to produce sentences that work within a context. We can also use such commands to summarize the chat log, to keep the context terse before generation. What's yet to be tested is parsing sentences back, for example from the user's input, through the LLM into the hand-made model data, so they can be processed deterministically according to the dev's specification (and thus avoiding weird LLM chaos). That's all probably possible with fine-tuning, which can be augmented artificially using a bigger LLM, like they did with Claude from Anthropic (or ChatGPT, really).
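To make that concrete, here is a rough sketch of the kind of loop I mean, purely as an assumption: local_generate() is a hypothetical stand-in for whatever small local model you run, and the mad-lib templates and the KEY: value answer format are made up for illustration, not a real API.

```python
# Sketch of "prompt whispering" between hand-made game logic and a small local LLM.
# local_generate() is a hypothetical stand-in for any locally running model;
# the templates and the fixed KEY: value output format are illustrative only.

def local_generate(prompt: str) -> str:
    raise NotImplementedError("plug your local model in here")

# Mad-lib style prompt command built from deterministic game data.
PROMPT_TEMPLATE = (
    "Context: {summary}\n"
    "Speaker: {npc_name}, mood: {mood}.\n"
    "Goal of this line: {goal}.\n"
    "Write one short in-character sentence."
)

def npc_say(summary, npc_name, mood, goal):
    prompt = PROMPT_TEMPLATE.format(
        summary=summary, npc_name=npc_name, mood=mood, goal=goal)
    return local_generate(prompt).strip()

# Use the same model to summarize the chat log and keep the context terse.
def summarize(chat_log):
    prompt = "Summarize this conversation in two sentences:\n" + "\n".join(chat_log)
    return local_generate(prompt).strip()

# Parse the player's free-form input back into hand-made, deterministic data:
# ask the model to fill a fixed form, then read it back field by field so the
# game logic only ever sees known fields (no LLM chaos downstream).
def parse_player_input(text):
    prompt = (
        "Extract from the player's message.\n"
        f"Message: {text}\n"
        "Answer exactly in this form:\n"
        "INTENT: <ask|trade|threaten|other>\n"
        "TOPIC: <one word>"
    )
    fields = {}
    for line in local_generate(prompt).splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip().upper()] = value.strip()
    return fields
```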
But I don't have a GOOD proof of concept to share right now. Aside from AI Dungeon, there are some RPG games that use an LLM as an inference engine for local world generation, but they work more like a chaotic stream of consciousness rather than generating data into a coherent, premade structure.
In more general terms, after digging a lot into LLM models, I think it might be possible to create an LLM without a neural network, and probably better optimized. The gist is that the attention mechanism is basically a parser, with the neural network as the memory being parsed (as embeddings, but also as classes in the feed-forward layers); alternating layers of parsing and memory units is what gives the model its strength (a parse-tree-like structure). The activations have a very clear structure, with the middle-layer activations being distinct, so it's highly probable that the middle layers encode inference while the early and late layers encode parsing and generation.

It turns out that the memory part of each layer (the neural networks) is probably equivalent to a statistical sorting of word classes, with each layer abstracting further into topic classes, which means the inference manipulates high-level concepts and then generates back down by composing the statistical probabilities of classes into word classes. That's something we already loosely do with techniques such as n-grams and bag-of-words, but those techniques don't deal well enough with polysemy and word order, because when the research moved onto those problems the deep learning revolution happened and they were handled with neural models instead (starting with ELMo), forever locking them into the neural black box.

The thing is, neural networks are good when we don't know the statistical distribution of the meaningful artifacts in the input; training auto-sorts noise from meaning as a black box. But we do know the statistical distribution of the input and output of an LLM, so there is no need for a black-box strategy. We know the statistical envelope of a word or a corpus gives back meaning (such as word class and topic class): for example, the word "the" has a much more random envelope than a word like "xylophone", and words that have a high frequency of "the" in their proximity envelope are quite reliably nouns.
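To make the "envelope" idea concrete, here is a tiny sketch of that kind of measurement. The toy corpus, window size, and entropy measure are arbitrary illustrative choices, not a claim about what an actual LLM layer computes: count each word's neighbors within a window, compare how flat the envelope of "the" is against a rarer content word, and flag words that often sit next to "the" as noun candidates.

```python
# Tiny sketch of a word's "context envelope": neighbor counts within a window.
# Corpus, window size, and the flatness (entropy) measure are illustrative only.
import math
from collections import Counter, defaultdict

def context_envelopes(tokens, window=2):
    env = defaultdict(Counter)
    for i, w in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                env[w][tokens[j]] += 1
    return env

def entropy(counter):
    # Higher entropy = flatter, more "random" envelope.
    total = sum(counter.values())
    return -sum((c / total) * math.log2(c / total) for c in counter.values())

tokens = "the cat sat on the mat while the xylophone played near the old cat".split()
env = context_envelopes(tokens)

# A function word like "the" tends to have a flatter (higher-entropy) envelope
# than a content word like "xylophone".
print(entropy(env["the"]), entropy(env["xylophone"]))

# Words frequently adjacent to "the" are noun candidates; this is a crude
# heuristic that is noisy on a corpus this tiny but sharpens with more text.
noun_candidates = [w for w, c in env.items() if c["the"] >= 2]
print(noun_candidates)
```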
gimymblert
« Reply #479 on: February 12, 2023, 06:12:28 PM »
CRASH course to understand the basics of LLMs, explained as simply as possible.

Word embeddings and semantics:
Vectoring Words (Word Embeddings) - Computerphile
The Illustrated Word2vec (https://jalammar.github.io/illustrated-word2vec/)
A Complete Overview of Word Embeddings (https://www.youtube.com/watch?v=5MaWmXwxFNQ)
The Narrated Transformer Language Model (https://www.youtube.com/watch?v=-QH8fRhqFHM)
Attention - the beating heart of ChatGPT: Transformers & NLP 4 (https://www.youtube.com/watch?v=sznZ78HquPc)
Visual Guide to Transformer Neural Networks - (Episode 1) Position Embeddings (https://www.youtube.com/watch?v=dichIcUZfOw)
Visual Guide to Transformer Neural Networks - (Episode 2) Multi-Head & Self-Attention (https://www.youtube.com/watch?v=mMa2PmYJlCo)
Transformer Neural Networks - EXPLAINED! (Attention is all you need) (https://www.youtube.com/watch?v=TQQlZhbC5ps)
Visual Guide to Transformer Neural Networks - (Episode 3) Decoder’s Masked Attention (https://www.youtube.com/watch?v=gJ9kaJsE78k)
Cosine Similarity, Clearly Explained!!! (https://www.youtube.com/watch?v=e9U0QAFbfLI)
Recurrent Neural Networks (RNNs), Clearly Explained!!! (https://www.youtube.com/watch?v=AsNTP8Kwu80)
Long Short-Term Memory (LSTM), Clearly Explained (https://www.youtube.com/watch?v=YCzL96nL7j0)
The Illustrated GPT-2 (Visualizing Transformer Language Models) (https://jalammar.github.io/illustrated-gpt2/)
How GPT3 Works - Visualizations and Animations (https://jalammar.github.io/how-gpt3-works-visualizations-animations/)
Hopfield Networks is All You Need (Paper Explained) (https://www.youtube.com/watch?v=nv6oFDp6rNQ)
ROME: Locating and Editing Factual Associations in GPT (Paper Explained & Author Interview) (https://www.youtube.com/watch?v=_NMQyOu2HTo)
Transformer Memory as a Differentiable Search Index (Machine Learning Research Paper Explained) (https://www.youtube.com/watch?v=qlB0TPBQ7YY)
Beyond AIML: Chatbots 102 (https://www.gamedeveloper.com/programming/beyond-aiml-chatbots-102?print=1) - an old technique for contrast, but it could also be useful for building a scriptable LLM, with high-level prompt management inspired by it, among other potential techniques.
« Last Edit: February 12, 2023, 10:31:31 PM by gimymblert »