Very interested to see where you end up with all this. Without knowing too much about how your system is set up, how feasible would it be to have multiple simulations running simultaneously? I could see the long simulation times being much less of an issue if you had the option of running a few in the background and checking in occasionally while you tweaked some settings and got ready to set up your next trainer.
Also I'm super into those brain visualizations! Really really great stuff.
This version actually does have multiple instances training in the background simultaneously, but it's still slow due to how high I have to crank the PhysX settings to get stable movements (plus a general lack of optimization so far, but there's only so much that will help).
Sort of on that note, I've been thinking about it for a while, and I've decided to put this project on hold for now, take some time to explore my options, and go back to the drawing board a little bit.
In order to get the concept up to the level of quality/polish that I'd like, it's going to be a larger task than I had originally intended or expected, even after stripping it down to its leanest version. Furthermore, if I hunker down and build out that simplified but polished version of the project, I don't feel like the final product will be substantial or competitive. My potential solutions to its weaknesses all seem to be bandaids or extensions built on top, rather than true solutions, and I don't feel good about blindly pursuing it without taking a step back to consider all my options.
It was always my intention to start on a very small project to gain experience building a game from scratch to release, before attempting a more ambitious, multi-year endeavor. This project originally seemed like it could fit that description, but at this point it's clear that is no longer the case. Given that, I'm taking some time to work on a series of quick experiments in order to clear my head, practice my dev skills (especially GPU-based tasks), and for a change of pace. The goal is to experiment and fail quickly so I can try a variety of things in a shorter amount of time.
I've been interested in different pattern-generation techniques for a while, and there are a number of algorithms I haven't had a chance to explore, so my early experiments are in that sphere. I'm not sure if it's ok to use this thread to post these side projects, but I figured it would be better than starting a new thread. Please let me know if this is frowned upon.
The first thing I messed around with was a GPU implementation of a reaction-diffusion algorithm. Here are some resources if anyone is interested in trying it out themselves (the YouTube video tutorial isn't for GPU; I just loosely skimmed it and implemented a version in Unity using compute shaders, but I think it would be helpful for anyone trying to code it):
http://www.karlsims.com/rd.html
https://en.wikipedia.org/wiki/Reaction%E2%80%93diffusion_system
Lessons learned:
-Sort of cool
-Extremely sensitive to parameters
-Limited to a smallish subset of patterns
I just made a quick application where you can 'paint' density with the mouse cursor:
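For anyone who wants to try it without setting up compute shaders first, the core update rule (the Gray-Scott model described on the Karl Sims page) can be sketched on the CPU in a few lines. This is a minimal NumPy stand-in, not my Unity implementation; the feed/kill values are just one illustrative point in the (very sensitive) parameter space:

```python
import numpy as np

def laplacian(g):
    # 3x3 convolution weights from the Karl Sims tutorial:
    # center -1, orthogonal neighbors 0.2, diagonal neighbors 0.05
    return (-g
            + 0.2 * (np.roll(g, 1, 0) + np.roll(g, -1, 0) +
                     np.roll(g, 1, 1) + np.roll(g, -1, 1))
            + 0.05 * (np.roll(np.roll(g, 1, 0), 1, 1) +
                      np.roll(np.roll(g, 1, 0), -1, 1) +
                      np.roll(np.roll(g, -1, 0), 1, 1) +
                      np.roll(np.roll(g, -1, 0), -1, 1)))

def gray_scott_step(a, b, da=1.0, db=0.5, feed=0.055, kill=0.062, dt=1.0):
    # A is consumed by the reaction and replenished by the feed rate;
    # B is produced by the reaction and removed by the kill rate.
    reaction = a * b * b
    a_next = a + (da * laplacian(a) - reaction + feed * (1.0 - a)) * dt
    b_next = b + (db * laplacian(b) + reaction - (kill + feed) * b) * dt
    return a_next, b_next

# Seed: chemical A everywhere, a small square of B in the middle
n = 64
a = np.ones((n, n))
b = np.zeros((n, n))
b[n // 2 - 3:n // 2 + 3, n // 2 - 3:n // 2 + 3] = 1.0

for _ in range(200):
    a, b = gray_scott_step(a, b)
```

The GPU version is the same math, just with each texel updated in parallel by a compute-shader thread and the two chemical concentrations stored in texture channels.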
The next mini-project was to experiment with a variant of convolutional neural networks on the GPU, inspired by some recent machine learning applications for image generation, particularly GANs (Generative Adversarial Networks). They're basically a collection of many image filters (think of Photoshop's Gaussian blur or sharpen; those are convolutional filters).
https://en.wikipedia.org/wiki/Convolutional_neural_network
https://blog.openai.com/generative-models/
Because I'm starting from scratch in Unity and don't have a giant image library to train on, I decided to try a different approach: evolve image filters that take random noise and transform it into a new image. Then I specify a target image and compare that reference image to the generated images by the similarity of their histograms.
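Roughly, the idea looks like this. It's a toy NumPy hill-climb over a single 3x3 filter, not my actual multi-layer compute-shader implementation or its real mutation scheme, and the target image here is just random stand-in data:

```python
import numpy as np

rng = np.random.default_rng(0)

def apply_filter(img, kernel):
    # 'Same'-size 2D convolution with wrap-around borders, built from shifts
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += kernel[dy + 1, dx + 1] * np.roll(np.roll(img, dy, 0), dx, 1)
    return np.clip(out, 0.0, 1.0)

def histogram_fitness(img, target, bins=32):
    # Coarse similarity: compare intensity histograms, ignoring spatial layout
    h1, _ = np.histogram(img, bins=bins, range=(0.0, 1.0), density=True)
    h2, _ = np.histogram(target, bins=bins, range=(0.0, 1.0), density=True)
    return -np.abs(h1 - h2).sum()  # higher (closer to 0) is better

n = 32
noise = rng.random((n, n))                            # input noise image
target = (rng.random((n, n)) > 0.5).astype(float)     # stand-in target image

# Evolve: mutate the kernel, keep mutations that score better
best_kernel = rng.normal(size=(3, 3))
best_score = histogram_fitness(apply_filter(noise, best_kernel), target)
for _ in range(100):
    candidate = best_kernel + rng.normal(scale=0.1, size=(3, 3))
    score = histogram_fitness(apply_filter(noise, candidate), target)
    if score > best_score:
        best_kernel, best_score = candidate, score
```

The histogram comparison is why this whole approach is so coarse: two images with totally different structure but similar brightness distributions score as near-identical, which is one of the limitations I ran into below.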
Here's an example training run:
Lessons Learned:
-Pretty limited use in its current state.
-Most basic filters produce a small subset of patterns much more easily than others, so many generated images look similar; the network lacks the expressive power to generate arbitrary images. This might be less of an issue with 'wider' networks (more parallel filters), which is possible, but since I implemented filters using color channels, I only had 4 filters per layer.
-Comparing histograms is only a very coarse approximation of image similarity.
-The learning algorithm itself could use some improvements in finding good candidate filters.
-Overall not a great success, but for a subset of patterns it's pretty decent, and quite fast (usually a minute or two per run).
Thank you everyone!