You can now use your own mp3 files and edit the track in game!
Export your parameters and share!
Download Builds Here (v0.5, also Oculus Rift builds):
Feedback is appreciated!
Status: Active Development
Dev Team: Solo
Platforms: PC, Mac
Framework: Unity 3D
Steam Greenlight Concept page: http://steamcommunity.com/sharedfiles/filedetails/?id=142393387
General Devlog: http://sagzorz.com
Source Code: https://github.com/SagarPatel/FrequencyDomain
IndieDB Page: http://www.indiedb.com/games/frequency-domain

Some Background
Back in May of 2012 I participated in TOJam 7 with 2 friends (sound designers) John Axon and Marius Masalar. For those who don’t know, TOJam is a 48 hour game jam that takes place yearly in Toronto. I HIGHLY recommend going to a TOJam, they’re incredibly well organized and it’s always a great experience.
Going into May 2012's TOJam, we knew that we wanted to make a game that used audio as a core part of the gameplay. Considering that we were 2 audio guys and 1 programmer (me), it seemed like the best use of our skills, although we weren't sure yet exactly what the game would be.
I arrived in Toronto the night before the jam started. At that point I had dabbled with Unity a little, but wasn't very comfortable with it, so I decided to poke around the documentation and eventually stumbled upon the "AudioSource.GetSpectrumData" function. This function essentially performs a Fast Fourier Transform (FFT) in real time on the audio/music being played and returns the frequency-domain data at a given resolution.
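For reference, here's a minimal sketch of what that call looks like in Unity. The component and field names are my own illustration, not code from the project; the API itself (GetSpectrumData, FFTWindow) is Unity's.

```csharp
using UnityEngine;

// Attach to a GameObject with an AudioSource playing music.
public class SpectrumSampler : MonoBehaviour
{
    // Must be a power of two between 64 and 8192.
    private const int SampleCount = 8192;
    private readonly float[] spectrum = new float[SampleCount];
    private AudioSource source;

    void Start()
    {
        source = GetComponent<AudioSource>();
    }

    void Update()
    {
        // Fills 'spectrum' with FFT magnitudes for the current frame,
        // ordered from low to high frequency (0 Hz up to Nyquist).
        source.GetSpectrumData(spectrum, 0, FFTWindow.BlackmanHarris);
    }
}
```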
To keep it short, the FFT data was used to move platforms (placed in rows) up and down, and also to control the color of said platforms. We ended up creating a first-person puzzle-platformer in which the player could manipulate the music, which in turn affected the platforms directly and allowed the player to traverse the environment. We gave the player control over which musical tracks to toggle on or off, as well as volume control, all of which directly contributed to the movement of the platforms, letting the player reach new areas of the environment if used correctly.
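The per-platform behavior boiled down to something like the sketch below: each platform follows one FFT bin, using its amplitude for both height and color. All names here are illustrative, not the actual jam code.

```csharp
using UnityEngine;

// Illustrative: one platform tracking one FFT bin.
public class ReactivePlatform : MonoBehaviour
{
    public int binIndex;            // which FFT bin this platform follows
    public float heightScale = 40f; // amplitude-to-height multiplier
    public float baseHeight = 0f;

    private Renderer rend;

    void Start() { rend = GetComponent<Renderer>(); }

    // 'spectrum' is the array filled by AudioSource.GetSpectrumData.
    public void Apply(float[] spectrum)
    {
        float amplitude = spectrum[binIndex];

        // Raise the platform with the bin's amplitude...
        Vector3 p = transform.position;
        p.y = baseHeight + amplitude * heightScale;
        transform.position = p;

        // ...and tint it from the same signal.
        rend.material.color = Color.Lerp(Color.blue, Color.red, amplitude * 10f);
    }
}
```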
There is much to be said about the TOJam game, the various design and technical dilemmas we encountered, and why I halted further development of it, but I'll save that for the dedicated TOJam 2012 project page.

Onto Frequency Domain
The Frequency Domain concept came from my desire to streamline the design of the TOJam 2012 game down to its core and to remove the awkwardness of the first-person platforming experience. I planned to improve on the main tech from the jam game, with the focus on creating an environment that would be continuously shaped and colored in real time via the FFT data.
Unlike the TOJam game, I wanted a gap-less, continuous surface for the player to travel on, so as to avoid the awkwardness of first-person platforming. By this I specifically mean moments when the player is unsure if they'll make the jump and has to resort to regularly looking down to find the edge of the platform. That issue occurred rather often in the TOJam game, and understandably so: its mechanics made it somewhat of a "precision platformer" (more on that and other issues on the TOJam 2012 page).
The obvious solution was to use a large mesh where the height of the vertices would be driven by the FFT data: no gaps, no awkward platforming moments. In the TOJam game, all the platforms reacted in real time to the music, and since the amplitude swings in FFT data are often violent, a surface dictated solely by raw, unprocessed real-time data makes for a rather difficult, if not unplayable, experience. That's when the idea of a "2D field mesh" defined by 2 axes came to me: one axis would hold the frequency values and the other would represent time. The frequency pattern (i.e. the FFT data) would start at one end of the field and make its way down the mesh, 1 row at a time, every update loop.
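The scrolling time axis can be sketched roughly like this: each update, every existing row shifts one step along the time axis and the newest FFT frame is written at the near edge. Sizes and names here are illustrative assumptions, not the shipping code.

```csharp
using UnityEngine;

// Sketch of the "2D field": frequency across one axis, time along the other.
public class FieldMesh : MonoBehaviour
{
    private const int FreqBins = 128;   // frequency axis (mesh width)
    private const int TimeRows = 256;   // time axis (mesh length)
    private readonly float[,] heights = new float[TimeRows, FreqBins];

    // Called once per update loop with the newest FFT frame.
    public void PushRow(float[] spectrum)
    {
        // Shift every existing row one step down the time axis...
        for (int t = TimeRows - 1; t > 0; t--)
            for (int f = 0; f < FreqBins; f++)
                heights[t, f] = heights[t - 1, f];

        // ...and write the fresh frequency pattern at the near edge.
        for (int f = 0; f < FreqBins; f++)
            heights[0, f] = spectrum[f];

        // The mesh's vertex Y values would then be copied from 'heights'
        // (e.g. by rebuilding Mesh.vertices) to deform the surface.
    }
}
```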
Here's some of the first footage of the mesh in action:
http://www.youtube.com/embed/E-PZMsENml4
The video above uses a single-track audio source. It's maxing out Unity's FFT function to fetch 8192 points of data, of which only the first 128 are used in this case. At a 44.1 kHz sample rate, those 128 bins cover roughly the bottom 345 Hz of the spectrum, so what we're seeing in the video is the low-frequency content: mainly the beat and the bass guitar in this song.
The end goal is for the player to explore/navigate this terrain as it's being generated: think first-person Sonic. Having multi-track audio data will allow me to create a mesh for each track; players get to explore and switch between the different tracks on the fly, with the meshes surrounding them in a ring-like formation.
....much more to come (just got to write it up)...