Author Topic: Procedural resource dump  (Read 138816 times)
nicknicknicknick
« Reply #540 on: March 19, 2023, 10:59:05 AM »

it's been a slice, lol

gimymblert
« Reply #541 on: March 19, 2023, 11:10:15 AM »

Anyway, I created this thread, so I know what goes in and why. :P

[email protected] (Guest)
« Reply #542 on: March 19, 2023, 11:13:09 AM »

I just wanted to download some code assets. But no, that's not what this thread is for. That's not what this forum is for. I guess I'll go back to pestering Valve for extra visibility boosts as you assholes circlejerk each other about vague conceptual ideas of design. It's been real.
gimymblert
« Reply #543 on: March 19, 2023, 01:44:06 PM »

https://cocktailpeanut.github.io/dalai/#/
Run LLaMA and Alpaca on your computer.

https://www.youtube.com/watch?v=4MGCQOAxgv4
Theory of Mind Breakthrough: AI Consciousness & Disagreements at OpenAI [GPT 4 Tested]
« Last Edit: March 19, 2023, 01:49:53 PM by gimymblert »

gimymblert
« Reply #544 on: March 19, 2023, 01:53:06 PM »

It's foolish to try to use a model without understanding what it does.

It's like wanting code without knowing how a program works.

Also, it's new, so there's a lot of catching up to do, which is why I shared solid information on the subject.

That applies even if you just want to make a silly game with generative NPC talk.

gimymblert
« Reply #545 on: March 21, 2023, 08:54:13 AM »

https://jaykmody.com/blog/gpt-from-scratch/
GPT in 60 Lines of NumPy
https://github.com/jaymody/picoGPT
picoGPT is an unnecessarily tiny and minimal implementation of GPT-2 in plain NumPy. The entire forward pass code is 40 lines of code.
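As a taste of how compact the forward pass is, here's a rough single-head causal self-attention in plain NumPy, in the spirit of picoGPT (toy weights and my own function names, not picoGPT's actual code):

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def causal_self_attention(x, w_q, w_k, w_v):
    # x: [seq_len, d_model]; w_*: [d_model, d_head]
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])
    # causal mask: each position may only attend to itself and the past
    mask = np.triu(np.ones_like(scores), k=1) * -1e10
    return softmax(scores + mask) @ v

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 16))            # 5 tokens, 16-dim embeddings
w = [rng.normal(size=(16, 8)) for _ in range(3)]
out = causal_self_attention(x, *w)
print(out.shape)                        # (5, 8)
```

A real GPT block adds multi-head splitting, a projection, layer norm, and an MLP around this, but the attention core really is this small.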

https://ondrejcertik.com/blog/2023/03/fastgpt-faster-than-pytorch-in-300-lines-of-fortran/
FASTGPT: FASTER THAN PYTORCH IN 300 LINES OF FORTRAN
https://github.com/certik/fastGPT
The progression of GPT-2 implementations, from the original down to "minimal", "nano", and "pico".


gimymblert
« Reply #546 on: March 22, 2023, 09:51:07 AM »




(embedded video) Sparse Priming Representations - the secret ingredient to scalable AGI memories

gimymblert
« Reply #547 on: March 22, 2023, 01:27:13 PM »

https://abuqader.substack.com/p/releasing-alpaca-30b
Releasing Alpaca-30B
A guide on how I fine-tuned Alpaca 30B and how to use it
https://huggingface.co/baseten/alpaca-30b


https://github.com/antimatter15/alpaca.cpp
Run a fast ChatGPT-like model locally on your device.
https://www.reddit.com/r/StableDiffusion/comments/11y6qs7/free_opensource_30_billion_parameters_minichatgpt/
Quote
We're already there. I've seen many memes and incorrect information about the capabilities of LLaMA models and Alpaca, and it arguably has already reached that point. It's not GPT-2 level, and I can have extremely coherent conversations with a character even with the untuned 13B model, almost like talking to a real person. It can function like an assistant, generate stories, and more.
Quote
The projects themselves are not bad. What they're doing for the community is amazing. They just should have chosen better default parameters for people new to all of this.
Quote
What settings do you recommend?

Short answer: temp 0.72, rep pen 1.1, top_k 0, and top_p 0.73 for creative; temp 0.7, rep pen 1.1764, top_k 40, and top_p 0.1 for precise

Better answer: Use text-generation-webui if you can instead of alpaca.cpp. This guide explains everything from beginning to end on setting it up, and my parameters recommendation comes from there. The web UI comes with several different presets for experimentation and makes it very easy to figure out what parameters you'd prefer for best results.
https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/
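Those sampler knobs are easier to reason about with a concrete sketch. Below is an illustrative NumPy implementation of temperature, repetition penalty, top-k, and top-p (nucleus) filtering; this is my own sketch, not the actual alpaca.cpp or text-generation-webui code:

```python
import numpy as np

def sample_token(logits, temperature=0.7, top_k=40, top_p=0.73,
                 rep_pen=1.1, prev_tokens=(), rng=None):
    """Pick the next token id from raw logits, showing what each knob does."""
    rng = rng or np.random.default_rng()
    logits = logits.astype(np.float64).copy()
    # repetition penalty: make already-emitted tokens less likely
    for t in set(prev_tokens):
        logits[t] = logits[t] / rep_pen if logits[t] > 0 else logits[t] * rep_pen
    logits /= max(temperature, 1e-8)          # lower temp -> sharper distribution
    if top_k > 0:                             # top-k: keep the k highest logits
        k = min(top_k, logits.size)
        logits[logits < np.sort(logits)[-k]] = -np.inf
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # top-p: keep the smallest set of tokens whose probability mass reaches p
    order = np.argsort(probs)[::-1]
    keep = np.cumsum(probs[order]) <= top_p
    keep[0] = True                            # always keep the most likely token
    mask = np.zeros(probs.size, dtype=bool)
    mask[order[keep]] = True
    probs = np.where(mask, probs, 0.0)
    probs /= probs.sum()
    return int(rng.choice(probs.size, p=probs))
```

With top_k 0 and top_p near 1 the tail of the distribution stays in play (the "creative" preset above); a tiny top_p like 0.1 makes output nearly deterministic (the "precise" preset).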

https://www.reddit.com/r/singularity/comments/11u848r/i_am_alpaca_7b_ask_me_anything/?sort=new
I am Alpaca 7B - Ask Me Anything

https://www.reddit.com/r/StableDiffusion/comments/11xuki3/i_made_a_free_website_to_train_your_own/
I made a free website to train your own Dreambooth models and play with ControlNET on those models
https://trainengine.ai/




(embedded video) NEW Procedural Animation In Godot 4.0

gimymblert
Level 10
*****


The archivest master, leader of all documents


View Profile
« Reply #548 on: March 23, 2023, 06:33:00 AM »




(embedded video) Generative grammars as a form of procedural content generation

(embedded video) An introduction to procedural lock and key dungeon generation

« Last Edit: March 23, 2023, 06:38:20 AM by gimymblert »

gimymblert
« Reply #549 on: March 23, 2023, 07:16:40 AM »




(embedded video) The Secret Behind Unexplored: Cyclic Dungeon Generation | AI and Games

(embedded video) An introduction to graph rewriting for procedural content generation

gimymblert
« Reply #550 on: March 23, 2023, 11:45:59 AM »




(embedded video) Dungeon Generation in Gun Game

(embedded video) Procedural Animation in 5 Minutes | devlog 1

https://www.youtube.com/watch?v=fnFj3dOKcIQ
Brian Bucklew - Dungeon Generation via Wave Function Collapse

https://www.youtube.com/watch?v=k2rgzZ2WXKo
Best Practices for Procedural Narrative Generation

https://www.youtube.com/watch?v=WutTZ4FCHA8&t=312s
Kristen Yu: Video Game Quest Theory for Improved Procedural Content Generation

https://www.youtube.com/watch?v=6luOrhlzvrc
The Four Fundamental Quests

gimymblert
« Reply #551 on: March 23, 2023, 02:23:24 PM »




(embedded video) Jack Schlesinger: Your Puzzle Is A Secret Dungeon

gimymblert
« Reply #552 on: March 24, 2023, 08:48:32 AM »




(embedded video) Procedural Building Generation with Grammars

gimymblert
« Reply #553 on: March 25, 2023, 07:17:50 AM »




(embedded video) I Built a Procedural Mission Generator Prototype for No Man’s Sky
https://www.deep-space-travel-network.com/missiongenerator/
https://perchance.org/tutorial
https://perchance.org/nmsmissionsaccordian#edit
« Last Edit: March 25, 2023, 07:26:19 AM by gimymblert »

gimymblert
« Reply #554 on: March 25, 2023, 09:06:09 AM »

https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/
https://rentry.org/llama-tard-v2
LLaMA Int8 4bit ChatBot Guide v2 Animated Llama
Want to fit the most model in the amount of VRAM you have, if that's a little or a lot? Look no further.

https://llamahub.ai/
Connect custom data sources to your LLM with one or more of these loaders (via LlamaIndex or LangChain)

Quote
satireplusplus
VRAM is the limiting factor to run these things though, not tensor cores

currentscurrents
Right. And even once you have enough VRAM, memory bandwidth limits the speed more than tensor core bandwidth.
They could pack more tensor cores in there if they wanted to, they just wouldn't be able to fill them with data fast enough.

pointer_to_null
This is definitely true. Theoretically you can page stuff in/out of VRAM to run larger models, but you won't be getting much benefit over CPU compute with all that thrashing.

Enturbulated
You are absolutely correct. text-gen-webui offers "streaming" via paging models in and out of VRAM. Using this your CPU no longer gets bogged down with running the model, but you don't see much improvement in generation speed as the GPU is churning with loading and unloading model data from main RAM all the time. It can still be an improvement worth some effort, but it's far less drastic of an improvement than when the entire model fits in VRAM.
https://github.com/oobabooga/text-generation-webui

shafall
To give some more specifics: most of the time it's not the CPU that copies the data on modern systems, it's the PCI DMA engine (which may be on the same die). The CPU just sends address ranges to the DMA controller.

wojtek15
Hey, recently I was thinking if Apple Silicon Macs may be best thing for AI in the future. Most powerful Mac Studio has 128Gb of Uniform RAM which can be used by CPU, GPU or Neural Engine. If only memory size is considered, even A100, let alone any consumer oriented model, can't match. With this amount of memory you could run GPT3 Davinci size model in 4bit mode.

currentscurrents
I'm hoping that non-von-Neumann chips will scale up in the next few years. There are some you can buy today, but they're small: https://www.syntiant.com/ndp200
Quote
NDP200 is designed to natively run deep neural networks (DNNs) on a variety of architectures, such as CNNs, RNNs, and fully connected networks, and it performs vision processing with highly accurate inference at under 1 mW.
Quote
Up to 896k neural parameters in 8bit mode, 1.6M parameters in 4bit mode, and 7M+ In 1bit mode
An Arduino idles at about 10 mW, for comparison.
The idea is that if you're not shuffling the entire network weights across the memory bus every inference cycle, you save ludicrous amounts of time and energy. Someday, we'll use this kind of tech to run LLMs on our phones.

C0demunkee
maybe consider Tesla P40s
24gb, lots of CUDA cores, $150 each

Civil_Collection7267
Untuned 30B LLaMA, you're saying? It's excellent and adept at storywriting, chatting, and so on, and it can output faster than ChatGPT at 4-bit precision. While I'm not into this myself, I understand that there is a very large RP community at subs like CharacterAI and Pygmalion, and the 30B model is genuinely great for feeling like talking to a real person. I'm using it with text-generation-webui and custom parameters and not the llama.cpp implementation.
For assistant tasks, I've been using either the ChatLLaMA 13B LoRA or the Alpaca 7B LoRA, both of which are very good as well. ChatLLaMA, for instance, was able to answer a reasoning question correctly that GPT-3.5 failed, but it has drawbacks in other areas.
The limitations so far are that none of these models can answer programming questions competently yet, and a finetune for that will be needed. They also have the tendency to hallucinate frequently unless parameters are made more restrictive.

Civil_Collection7267
alpaca.cpp runs on the CPU. If you want to use LLaMA with GPU, you'll need to set it up with something like text-generation-webui.
At 8-bit precision, 7B requires 10GB VRAM, 13B requires 20GB, 30B requires 40GB, and 65B requires 80GB.
At 4-bit precision, 7B requires 6GB VRAM, 13B requires 10GB, 30B requires 20GB, and 65B requires 40GB.
With some tweaks, it's possible to run 7B LLaMA with 4GB VRAM.
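Those requirements follow almost directly from parameter count times bytes per weight, plus runtime overhead. A quick back-of-the-envelope helper (the 1.2x overhead factor is my own rough assumption for activations, KV cache, and buffers, not a figure from the comment):

```python
def est_vram_gb(n_params_billion, bits, overhead=1.2):
    """Rough VRAM (GB) to hold the weights: params * bytes/param * overhead.

    `overhead` is a guessed fudge factor for activations and buffers.
    """
    bytes_total = n_params_billion * 1e9 * (bits / 8)
    return bytes_total * overhead / 1e9

for n in (7, 13, 30, 65):
    print(f"{n}B @ 8-bit ~ {est_vram_gb(n, 8):.0f} GB, "
          f"@ 4-bit ~ {est_vram_gb(n, 4):.0f} GB")
```

The results land close to the numbers quoted above; the exact figures depend on the inference stack, context length, and quantization format.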

Civil_Collection7267
Quote
how does it compare to chat?
13B and 30B LLaMA are both amazing for this, and you can even upload character presets to make it into whatever you want. Characters can remember things you say in a conversation and genuinely feel lifelike. I promise I'm not exaggerating when I say it's that good. I don't really use llama.cpp and alpaca.cpp so I can't comment much on the experience there.
I haven't tried anything like RPG character builds, but 30B LLaMA can write very good stories, so I imagine it could do that too depending on what you're looking for.
Quote
Can it write code
This is the one major area where the models fail currently. None of the untuned models can write code in any competent capacity. However, finetuning should be able to improve this significantly. I don't think it'll be long until someone steps up to do this.
https://www.reddit.com/r/StableDiffusion/comments/11y6qs7/comment/jd841h9/?utm_source=share&utm_medium=web2x&context=3

nDeconstructed
IDK. I guess I'm just...

(•_•)

( •_•)>⌐■-■

(⌐■_■)

... that good.

https://github.com/machado2/alpaca-docker
This is a Dockerfile and docker-compose configuration to run the Alpaca language model in a container.

https://github.com/qwopqwop200/GPTQ-for-LLaMa
4 bits quantization of LLaMa using GPTQ
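GPTQ does clever second-order, error-compensating quantization; as a much simpler illustration of what storing weights in 4 bits means, here is naive per-row min-max quantization in NumPy (this is not GPTQ's algorithm):

```python
import numpy as np

def quantize_4bit(w):
    """Naive per-row min-max quantization of a weight matrix to 4-bit codes."""
    lo = w.min(axis=1, keepdims=True)
    hi = w.max(axis=1, keepdims=True)
    scale = (hi - lo) / 15.0                  # 4 bits -> 16 levels (0..15)
    q = np.round((w - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize(q, scale, lo):
    return q * scale + lo

rng = np.random.default_rng(1)
w = rng.normal(size=(4, 64)).astype(np.float32)
q, scale, lo = quantize_4bit(w)
w_hat = dequantize(q, scale, lo)
print(np.abs(w - w_hat).max())  # per-element error is bounded by scale/2
```

GPTQ improves on this by choosing quantized values so that the *layer output* error stays small, which is why 4-bit GPTQ models lose far less quality than this naive scheme would suggest.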

https://medium.com/@berkanzorlubas/creating-personalized-animated-memes-using-fine-tuned-text-to-image-models-37a45de4c7c3
Creating personalized animated memes using fine-tuned text-to-image models

https://www.reddit.com/r/StableDiffusion/comments/120l1nm/i_changed_style_of_the_video_using_stable/
I changed style of the video using Stable Diffusion!

« Last Edit: March 25, 2023, 04:25:14 PM by gimymblert »

gimymblert
« Reply #555 on: March 25, 2023, 04:59:17 PM »

https://huggingface.co/spaces/PAIR/Text2Video-Zero
Quote
We built Text2Video-Zero, a first zero-shot text-to-video synthesis diffusion framework that enables low-cost yet high-quality and consistent video generation using only pre-trained text-to-image diffusion models, without any training on videos or optimization! Text2Video-Zero also naturally supports cool extensions of pre-trained text-to-image models such as Instruct Pix2Pix, ControlNet and DreamBooth, on top of which we present Video Instruct Pix2Pix, Pose Conditional, Edge Conditional, and DreamBooth-specialized applications. We hope our Text2Video-Zero will further democratize AI and empower everyone's creativity by unleashing the zero-shot video generation and editing capacity of these amazing text-to-image models, and encourage future research!

https://twitter.com/_akhaliq/status/1639434628166459395
modelscope text2video generation

https://nmkd.itch.io/flowframes
Quote
Flowframes is a simple but powerful app that utilizes advanced AI frameworks to interpolate videos in order to increase their framerate in the most natural looking way possible.


https://www.youtube.com/watch?v=0RGYwnfxTRo
ControlNet Deep Dive - OpenPose In-Depth Tutorial

https://www.youtube.com/watch?v=vUTV85D51yk
ComfyUI - Node Based Stable Diffusion UI - THIS IS THE FUTURE!!!!!

https://www.youtube.com/watch?v=A6dQPMy_tHY
ULTRA SHARP Upscale! - Don't miss this Method!!! / A1111 - NEW Model

https://www.youtube.com/watch?v=Nl43zR5dVuM
Why Is EVERYONE Using This Model?! - Rev Animated for Stable Diffusion / A1111



https://github.com/Kav-K/GPT3Discord
A robust, all-in-one GPT3 interface for Discord. ChatGPT-style conversations, image generation, AI-moderation, custom indexes/knowledgebase, youtube summarizer, and more!
https://github.com/daveshap?tab=repositories

https://www.reddit.com/r/singularity/comments/1210cl0/reflexionbased_gpt4_significantly_outperforms/
Reflexion-based GPT-4 significantly outperforms GPT-4 on HumanEval accuracy (0.67 --> 0.88)
https://nanothoughts.substack.com/p/reflecting-on-reflexion
https://i.redd.it/vapi6i60vtpa1.png

https://www.reddit.com/r/MachineLearning/comments/1027geh/r_massive_language_models_can_be_accurately/
[R] Massive Language Models Can Be Accurately Pruned in One-Shot
https://en.wikipedia.org/wiki/Perplexity
https://news.ycombinator.com/item?id=35238338&ref=upstract.com
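Perplexity, the metric used to judge how much pruning or quantization hurts a model, is just the exponentiated average negative log-likelihood per token. A minimal sketch:

```python
import math

def perplexity(token_probs):
    """Perplexity from the model's probability for each observed token."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# A model that assigns uniform probability over 4 choices has perplexity ~4:
print(perplexity([0.25, 0.25, 0.25, 0.25]))
```

Intuitively, a perplexity of N means the model is, on average, as uncertain as if it were choosing uniformly among N tokens; lower is better.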

http://chat.petals.ml/
This chatbot runs BLOOMZ-176B over the Petals network.
https://petals.ml/

https://simonwillison.net/2023/Mar/17/beat-chatgpt-in-a-browser/
Could you train a ChatGPT-beating model for $85,000 and run it in a browser?

https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html
The AI Revolution: The Road to Superintelligence
« Last Edit: March 25, 2023, 07:45:45 PM by gimymblert »

gimymblert
« Reply #556 on: April 01, 2023, 06:25:27 PM »

https://github.com/wawawario2/long_term_memory
Text Generation Web UI with Long-Term Memory
Quote
Welcome to the experimental repository for the Text Generation Web UI with a long-term memory (LTM) extension. The goal of the LTM extension is to enable the chatbot to "remember" conversations long-term. Please note that this is an early-stage experimental project, and perfect results should not be expected. This project has been tested on Ubuntu LTS 22.04. Other people have tested it successfully on Windows. Compatibility with macOS is unknown.
A gradio web UI for running Large Language Models like GPT-J 6B, OPT, GALACTICA, LLaMA, and Pygmalion.

https://huggingface.co/chansung/alpaca-lora-30b
https://huggingface.co/maderix/llama-65b-4bit/tree/main
llama-65b-4bit

https://vicuna.lmsys.org/
Quote
We introduce Vicuna-13B, an open-source chatbot trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT. Preliminary evaluation using GPT-4 as a judge shows Vicuna-13B achieves more than 90%* quality of OpenAI ChatGPT and Google Bard while outperforming other models like LLaMA and Stanford Alpaca in more than 90%* of cases. The cost of training Vicuna-13B is around $300. The training and serving code, along with an online demo, are publicly available for non-commercial use.

https://laion.ai/blog/oig-dataset/
Introducing the OIG Dataset: A Massive Open Source Instruction Dataset with ~43M Instructions!
Quote
Safety and Moderation

Along with OIG, Ontocord.ai is also releasing OIG-moderation, a small safety instruction dataset. OIG-moderation is intended to train a moderation model to predict labels for various moderation categories such as "needs intervention", "hate", "sexual content", etc. Ontocord will also release, in future versions, multilingual versions of the dataset, and include potential responses that give a reason why a chatbot might not respond to a prompt. It aims to address issues including privacy-eliciting prompts and depression responses, along with prompts eliciting sexual content and aggressive behavior from users.

OIG-moderation includes data from (a) public datasets such as anthropic-redteam and anthropic-harmless, prosocial, and contributed datasets from community members (b) augmented toxic data such as civil comments data converted into instructions, (c) anthropic-redteam data augmented with prosocial tags (d) data provided by the LAION community that might include NSFW prompts, and (e) synthetic depression data generated from a public depression bag of words dataset using one of LAION’s volunteer’s grammar fixing models.

A model trained on the OIG-moderation dataset can be used to provide safety labels, and the bot providers can choose to then block responses from their chatbots based on these labels. If a bot provider's policy for example permits sexual content, but prohibits PII eliciting text, they can hopefully do so with the output of a model trained on this OIG-moderation.
« Last Edit: April 01, 2023, 06:53:33 PM by gimymblert »

gimymblert
« Reply #557 on: April 02, 2023, 05:14:12 PM »

https://www.youtube.com/watch?v=JPP6DGL5ykQ
Corridor Crew Workflow For Consistent Stable Diffusion Videos

https://www.youtube.com/watch?v=wMmqCMwuM2Q
Diffusion and Score-Based Generative Models
« Last Edit: April 02, 2023, 07:29:29 PM by gimymblert »

gimymblert
« Reply #558 on: April 03, 2023, 04:01:25 PM »




(embedded video) Proceduralism for Games? Short answer is YES. | Christos Stavridis | GDC HIVE 2023

gimymblert
« Reply #559 on: April 04, 2023, 10:23:54 PM »

https://github.com/nomic-ai/gpt4all
gpt4all: a chatbot trained on a massive collection of clean assistant data including code, stories and dialogue

https://github.com/EleutherAI/pythia
Pythia: Interpreting Autoregressive Transformers Across Time and Scale

https://www.cerebras.net/blog/cerebras-gpt-a-family-of-open-compute-efficient-large-language-models/
Cerebras-GPT: A Family of Open, Compute-efficient, Large Language Models


https://github.com/Lightning-AI/lit-llama
Quote
Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.
https://github.com/karpathy/nanoGPT
« Last Edit: April 06, 2023, 07:59:29 PM by gimymblert »
