TIGSource Forums

Developer => Audio => Topic started by: JudahRoydes on September 25, 2011, 09:39:35 AM



Title: Introduction and scripting
Post by: JudahRoydes on September 25, 2011, 09:39:35 AM
 Hello humans, this is my first time posting here so any tips and feedback would be greatly appreciated. Now onto the questions. Have any of you had to do the scripting for your own music in a game? I am curious how common this is and if I should take the time to learn a scripting language. Also if you would like to check out some of my music you can do so at
http://soundcloud.com/judahroydes

Thanks for any info or comments.


Title: Re: Introduction and scripting
Post by: Matt Spencer on October 15, 2011, 07:51:49 AM
I'm not sure what your question means.  What do you mean by "scripting for music"?  Like changing loops or layers in a dynamic way?  I would say don't bother learning to script unless you plan to do a lot of scripting!! That stuff is easy for folks like me who code every day.

Your music is great! I bet you work with sound or music professionally.  You're a phenomenal guitarist, but I'd like to hear more electronic tracks like "pandas and robots". I love that catchy stuff!  I just started on soundcloud: http://soundcloud.com/slather (http://soundcloud.com/slather)


Title: Re: Introduction and scripting
Post by: JudahRoydes on October 22, 2011, 10:35:22 AM
Thanks for the reply, Matt. Basically, what I meant by scripting is coding inside the game engine that ties musical events to in-game behaviors or environments. As a composer, I actually find this aspect of game music very interesting. It forces you to think musically in a non-linear way, because the player's behavior is unpredictable. I think there are a lot of really amazing things that can be done with this, e.g. changing arrangements or adding instruments based on the player's proximity to a light source. Actually, the principles behind cellular automata are similar to some of the things I am thinking about. Let's say you have a "musical base state" for a given environment, and that this base state is a basic sine wave. You could cue changes to the filters, effects and pitch from actions performed by the player. In this way you could build up very interesting and complex musical activity. I am going to try out your Ninja game now. Thanks again for the advice and kind words.
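The "musical base state" idea can be sketched in a few lines of Python. Everything here is hypothetical (the class, the event names, the parameter mappings are made up for illustration): a sine oscillator through a one-pole low-pass filter, with in-game events nudging pitch and cutoff.

```python
import math

# Toy sketch of a "musical base state": a sine oscillator whose pitch and
# filter cutoff are nudged by in-game events. All names are hypothetical.
class BaseState:
    def __init__(self, sample_rate=44100):
        self.sample_rate = sample_rate
        self.freq = 220.0        # base pitch (A3)
        self.cutoff = 1000.0     # low-pass cutoff, in Hz
        self.phase = 0.0
        self._lp = 0.0           # filter memory

    def on_event(self, event):
        # Map game events to musical parameter changes.
        if event == "player_near_light":
            self.cutoff = min(self.cutoff * 1.5, 8000.0)   # brighten
        elif event == "player_in_shadow":
            self.cutoff = max(self.cutoff * 0.5, 200.0)    # darken
        elif event == "pickup":
            self.freq *= 2 ** (7 / 12)                     # jump up a fifth

    def render(self, n):
        # Produce n samples of the filtered sine.
        out = []
        k = 1.0 - math.exp(-2 * math.pi * self.cutoff / self.sample_rate)
        for _ in range(n):
            s = math.sin(self.phase)
            self.phase += 2 * math.pi * self.freq / self.sample_rate
            self._lp += k * (s - self._lp)   # one-pole low-pass
            out.append(self._lp)
        return out

state = BaseState()
quiet = state.render(64)
state.on_event("player_near_light")   # brighter timbre from here on
bright = state.render(64)
```

A real engine would run `render` inside the audio callback and feed `on_event` from the game loop, but the mapping idea is the same.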

Judah
www.soundcloud.com/judahroydes (http://www.soundcloud.com/judahroydes) 


Title: Re: Introduction and scripting
Post by: noah! on October 22, 2011, 06:38:25 PM
Heh, believe it or not, I'm actually a huge fan of this kind of thing, and have really been experimenting with it as of late. In fact, I even made something like that for a game I threw together (http://forums.tigsource.com/index.php?topic=21971.0) not too long ago. Granted, it's pretty primitive for what it is (the soundtrack is rhythmless ambient, and most of the triggers are proximity based), but I like to think it still counts as a dynamic soundtrack.

Honestly, though, from what I've seen in the indie world it seems like nobody really does too much with dynamic music. I think part of the reason is that the audio programming scene is ridiculously underdeveloped compared to what people are doing with graphics. As such, the only audio libraries that are free for commercial use are usually those that come with whatever graphics lib you're using, and they typically offer little more than file loading, playback, and a hook to load your own data into the buffer. So, to do anything beyond that, you need to roll your own code, which isn't usually something most game makers get excited about. Heck, to even do what I did in the game above I had to pretty much cut up and glue back together parts of SFML, all for the sake of seamless intro-then-loop playback. And that concept has existed since 1985!
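The intro-then-loop trick boils down to a little index arithmetic. A minimal sketch (the function names are made up, and a real version would write into an audio callback's buffer rather than return a list): the intro region plays once, then playback cycles the loop region forever.

```python
def intro_loop_index(pos, intro_len, loop_len):
    """Map a monotonically increasing stream position to a sample index:
    the intro region [0, intro_len) plays once, then the loop region
    [intro_len, intro_len + loop_len) repeats forever."""
    if pos < intro_len:
        return pos
    return intro_len + (pos - intro_len) % loop_len

def fill_buffer(song, pos, n, intro_len, loop_len):
    # What a streaming callback would do: copy n samples starting at pos.
    return [song[intro_loop_index(pos + i, intro_len, loop_len)]
            for i in range(n)]

song = [10, 11, 12, 13, 20, 21]       # 4-sample intro, 2-sample loop
out = fill_buffer(song, 0, 10, intro_len=4, loop_len=2)
# out -> [10, 11, 12, 13, 20, 21, 20, 21, 20, 21]
```

Seamlessness comes from doing this mapping per sample inside one continuous stream, instead of stopping one sound and starting another.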

To me it seems quite ridiculous. Here in the indie scene we have hundreds of programmers writing and releasing their own engines. They all have the same features, the same goals, the same paradigms, and sometimes even the same target platforms. Hundreds of developers writing and rewriting animation classes, implementing efficient tiling algorithms, porting A* to every language on the planet.

And not one is willing to write a good audio library.


Title: Re: Introduction and scripting
Post by: JudahRoydes on October 24, 2011, 05:55:50 PM
@Noah!

It's unfortunate that most devs don't take the time to try this sort of thing out. They did a lot of it in Wind Waker back in the day, and I have not seen it used much since. Most often the musical changes are scenario-based, and then the whole piece is swapped out. That is effective, but I think one could apply much subtler changes to the musical background more frequently, achieve a greater emotional variance, and connect more with the player. I played through A Square Alone. It's pretty interesting. I really liked how it felt like I was in a zero-G environment and that the walls were sticky. It would have been really cool if the tone created by the plasma-stream thing changed somehow when you got one of the circle things. This all comes back to what I was asking in the first place, I guess. If the devs and coders don't want to spend the time and resources to implement something like this, do you think it would be a good idea for composers to learn to script their own stuff?


Title: Re: Introduction and scripting
Post by: noah! on October 24, 2011, 07:09:50 PM
Hey, thanks for trying out my game! I'm glad that you found it at least somewhat interesting. :-)

Now back on topic, what do you suppose composers could do to further this goal? Audio programming is just as tricky as graphics programming, but unlike the latter it hasn't really advanced much since the days of Red Book streaming.

On one hand, there's simple stuff like instrument swapping. That isn't hard to do, and modern computers can handle the burden of decompressing and mixing multiple audio streams rather easily. It also isn't too hard to apply simple FX, and there are more than a few audio libraries that support just that. But the interesting stuff, like realtime synthesis and whatnot, is not only painful to implement but also difficult to find libraries for. I mean, the most feature-complete realtime synth engine I know of is Cosyne (http://forums.tigsource.com/index.php?topic=11777.0), and it looks like it only implements up to the FM era of synthesis. But on the plus side, that was just one person's hobby project, so it's likely that a dedicated team could pull together something pretty neato.
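The "simple stuff" — pre-aligned stems mixed with per-layer gains so instruments can be faded in and out at runtime — can be sketched like this (toy sample data and hypothetical names, just to show the shape of the mix step):

```python
# Mix several equal-length, pre-aligned stems with per-layer gains,
# so layers can be brought in or out while the song keeps playing.
def mix_layers(stems, gains):
    """stems: list of equal-length sample lists; gains: list of floats."""
    return [sum(g * stem[i] for g, stem in zip(gains, stems))
            for i in range(len(stems[0]))]

drums  = [0.5, -0.5, 0.5, -0.5]   # toy 4-sample stems
melody = [0.1, 0.2, 0.3, 0.4]

# Exploration: drums only. Combat: melody faded in on top.
explore = mix_layers([drums, melody], [1.0, 0.0])
combat  = mix_layers([drums, melody], [1.0, 0.8])
```

In practice the gains would be ramped over a few hundred milliseconds rather than switched instantly, to avoid clicks, but the mixing itself really is this cheap.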

I guess in conclusion, it really depends on what you want to do. I can understand why there aren't many dynamic frameworks out there, because each composer's demands are so varied that it'd be nearly impossible to make a one-size-fits-all solution. However, as long as you stick to simple stuff it really isn't too bad to rig up the framework that you want.

(hmm, maybe this is just the kick in the pants I need to actually get up and program that loop-chaining music engine idea I've been kicking around...)


Title: Re: Introduction and scripting
Post by: Triplefox on October 26, 2011, 01:35:52 AM
If you work in AS3, you owe it to yourself to check out SiON (https://sites.google.com/site/sioncenter/specifications). It does offer a lot of promise for scripting more dynamic music because the engine is relatively agnostic about how you want to generate your sound (e.g. a four-track song is just four sequences started simultaneously, sharing a few global parameters like bpm) and it contains pretty much all the necessary hooks for triggering or synchronizing events outside the sound engine.

In AS3, a high-quality virtual-analog (VA) synth would take up too much CPU time, but the SiON engine does offer a lot of possibilities for cheaper late-'80s/early-'90s digital sounds with FM, filtering, sampling and effects, and especially chipsounds of the period. It's a good model for striking the balance between flexibility and real-time performance, allowing some common synthesis models to be mixed and matched as desired.

However, you will have to learn MML to really use it, which is quite a large barrier to entry since most of the source documentation is poorly translated Japanese. I started a tracker at TIGJam as an alternative composition method for SiON - it's an undocumented demo (http://www.ludamix.com/apps/siontracker) right now; I hope to build it up some more soon and make it roughly as strong as Famitracker.

But I see MML as a benefit overall, not a burden. More generally, I see audio languages as the key building block for indies to progress further in this area, since they address a lot of the issues involved in going from synthesis to finished works in a relatively lightweight way. MML source is straightforward to port between implementations, and you can quickly cut-and-paste to keep only the relevant parts and replace implementation details like instrument data. A number of the songs posted on MMLTalks (http://mmltalks.appspot.com/) are ports from other MML implementations. This is not as easily done with something monolithic like MIDI - there are libraries to read in a SMF, but it takes some extra tooling to find and separate the "useful" data from implementation-specific stuff. Nobody is really happy with MIDI as a long-term storage method.

Similarly, lower-level audio languages like Csound or ChucK make it easier to prototype particular synthesis methods. This is a path that Cosyne is also taking; we could benefit greatly by evaluating the existing languages, wrapping them into library form and testing their integration in game engines. Once we have something working, instead of writing C++ to do synthesis you could write in the audio language instead, much in the same way that we've started to write shaders for graphics. Doing this would allow us to use more cut-and-paste borrowing, and thus progress faster. Combining the lower-level synthesis language with a higher-level MML-style language for composition, we'd have a more-or-less comprehensive system that could do simple things easily (loading and streaming common formats, looping music, sound triggers) and put complicated things (dynamic, evolving audio) within reach even given the time constraints of a game jam or similar environment.

Historically, a lot of focus has been placed on making slick GUI tools for audio specialists to do something like what they did in physical studios, recording to tape. I don't think that's a good model for the future of audio, however, and especially not for games. The transition of graphic design from the physical world to Photoshop, and then to CSS, holds some good examples. The benefits of textual formats are hard to ignore, and programming languages themselves have had no success escaping them...


Title: Re: Introduction and scripting
Post by: PaulForey on October 31, 2011, 09:28:37 PM
Hey all!

Really awesome to see stuff like this popping up more often in this forum. Synthesized and/or dynamic audio is my major interest in game audio, so I thought I'd share some of what I know.

If you're delivering your game to a suitable platform, then a tool/programming language I've used for lots of synthesis stuff is Pure Data (http://puredata.info/). The method I've used personally is simply running it as a separate process linked via OSC, as gleaned from this (http://www.obiwannabe.co.uk/tutorials/gamedev/OSC/oschooks.html) Andy Farnell article (I've bought his Pure Data book "Designing Sound"; it's awesome). However, there's also a project called libpd (http://gitorious.org/pdlib/pages/Libpd) which allows you to actually embed Pure Data within your code/engine. Frankly, though, that's probably beyond my technical skills.
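The OSC messages themselves are simple enough to build by hand with nothing but the standard library. A minimal encoder sketch (floats only, no bundles — a real project would likely use an OSC library instead, and the Pure Data patch is assumed to be listening for OSC over UDP on the chosen port):

```python
import socket
import struct

def _osc_pad(b):
    # OSC strings are null-terminated and padded to a 4-byte boundary.
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address, *args):
    """Encode a minimal OSC message (float arguments only) as bytes."""
    tags = "," + "f" * len(args)
    data = _osc_pad(address.encode()) + _osc_pad(tags.encode())
    for a in args:
        data += struct.pack(">f", float(a))   # big-endian float32
    return data

def send_osc(msg, host="127.0.0.1", port=9000):
    # Fire-and-forget UDP datagram to the Pd process.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(msg, (host, port))
    sock.close()

packet = osc_message("/player/speed", 0.5)
```

The address (`/player/speed`) and port here are invented for the example; the game would call `send_osc` whenever the relevant state changes, and the patch routes on the address.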

For an example of using Pure Data for game sound, see my blog post here (http://runtime-audio.co.uk/blog/i-can-see-a-river-from-here) (ignore the first three paragraphs, I keep mistakenly thinking people want to read about my life). The post contains a link to the simple "audio game" (as opposed to "video game") that I wrote in Python (using Pygame). If you want to see the patches I used, either open up "noticing.pd" in ./pd, or run the game from the command line with the argument "-p" or "--showpd". Every bit of audio in this game is synthesized from scratch within Pure Data, and I think it's an example of not only the extra interest that can come from such complete audio control, but also the extra information that can be relayed to the player through such means. Sure this game does it in an arcade-y and somewhat contrived manner, but I think there's a lot to be said for games which use audio for a lot more than just decoration.

However, Pure Data is not a compositional tool. It's a graphical programming language with tons of DSP features. It doesn't have built-in sequencers, and there's no really good way (yet) to easily sequence a large number of different events. You'll notice in my Python project that I've made a few basic sequencers (and have since come up with way better ways to do it), but it's a ball-ache to really compose with Pure Data. Its main strength is in the shaping of sounds, whether synthesis- or sample-based.



I think the really important thing when doing audio/compositional works in a more programmy-type way is the speed at which the artist can accurately experience what they have done, and then change it.

With Pure Data it's really easy to have the game running, putting you in the appropriate place, and then you can be changing the Pure Data code on-the-fly and getting instant feedback on what your changes are doing. It's pretty sweet.

From my experience with SiON, changing MML is actually pretty quick and easy, and depending on how you set it up, you probably wouldn't even have to recompile the ActionScript (I'm a noob, so I DO have to recompile :P). Perhaps it's even possible to have the code re-read a given MML file while the game is running, so that you could get real-time response to compositional changes. Might be worth some investigation.
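That re-read-the-file idea is easy to prototype by polling the file's modification time each frame. A sketch (`MMLWatcher` and `recompile_mml` are hypothetical stand-ins for whatever the sound engine actually does with the MML source):

```python
import os

# Poll an MML file's mtime each frame; reload it only when it changes.
class MMLWatcher:
    def __init__(self, path, recompile_mml):
        self.path = path
        self.recompile_mml = recompile_mml   # callback taking MML source text
        self._mtime = None

    def poll(self):
        try:
            mtime = os.path.getmtime(self.path)
        except OSError:
            return False          # file missing; keep the old tune playing
        if mtime != self._mtime:
            self._mtime = mtime
            with open(self.path) as f:
                self.recompile_mml(f.read())
            return True
        return False

# Demo: watch a freshly written file; expect one reload, then no-ops.
import tempfile
compiled = []
path = os.path.join(tempfile.mkdtemp(), "song.mml")
with open(path, "w") as f:
    f.write("t120 l8 cdefgab>c")
watcher = MMLWatcher(path, compiled.append)
first = watcher.poll()    # new file -> reload
second = watcher.poll()   # unchanged -> nothing
```

Calling `poll()` once per frame is cheap (a single `stat`), and editing the file in an external editor then becomes a live-coding loop.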

By contrast, current methods of game audio production are incredibly slow to iterate with. You load up your audio program, make your tune, render it out, and then load up the game and listen. Of course, I live in a fairy-tale world where the composer/sound designer gets a chance to prototype their stuff within the game itself, rather than making the work without having even played the game.

While writing this, an idea that vastly appealed to me: have Pure Data as a linked-in sound engine, but have Pure Data interpret an audio language such as MML. Then you'd have control over the sounds/effects/dynamic mixing and whatever within the (fairly) easy-to-use/-read Pure Data environment, but proper sequencing and compositional elements from MML (which hopefully could also be edited on-the-fly).



Finally, in response to Noah!'s claim that audio programming hasn't gotten any further than Red Book streaming, I'd say that's a falsehood. There are lots of tremendously clever things that audio programmers are now capable of. The problem is that none of this work has been within the games industry. Programs can analyse songs and generate (fairly) accurate metadata, and websites can accept microphone input of you humming a popular tune and attempt to give you the correct song. I saw one site which claimed to be able to take any song and process it in such a way that it changed the song's genre (it had a few probably pre-made examples, and it didn't really change the genre of the song so much as skewer the song right through the heart, but they tried!).

Video games haven't really been interested in any of this tech, and that's probably because not a lot of games have needed it. What we need is more people making genuinely musical games (Guitar Hero, you're not a musical game), or games which use audio in a genuinely intrinsic fashion. There was a Flash music game competition recently (in honour of Child of Eden), and of the entries I played (including the winners) I was pretty disappointed. None of them seemed to really have integral game audio. IIRC the winning entry was a physics-based puzzle game, but the sounds/music generated by the player's actions were just decorative rewards, not actually important to the game-play. Games are about mechanics and play, so truly integral game audio needs to function within those things, not as an external reward for them.



I also just want to mention that I think tracker file formats are a really cool way to store certain types of music, and they can be used in very cool ways by game code (you can alter the tempo, transpose everything, change which section of the tune is playing, etc.). However, I find trackers are, in general, a real ball-ache to use compositionally. What I want is a tracker with a modern piano-roll interface, modern polyphony, and a nice, easy way to adjust tracker effects. Let the program decide how many channels the file needs; I just want to play a darned C major! Let the program decide how many ticks per second and all that; I just want to play the notes in and have them play back at the correct time, with as much timing nuance as possible. I feel a program like that could be a first step to utilising tracker files as the great tool they are, without having to write "tracker music" (though obviously there would still be some big limitations).
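"Let the program decide how many channels" is essentially greedy interval assignment: walk the notes in time order, reuse the first channel that has gone quiet, and open a new channel only when every existing one is still sounding. A sketch, with a made-up note format of `(start_row, end_row, pitch)` where the end row is exclusive:

```python
# Greedily assign piano-roll notes to tracker channels, opening new
# channels only when all existing ones are still busy.
def allocate_channels(notes):
    """notes: list of (start_row, end_row, pitch), end exclusive.
    Returns (channel_count, list of (channel, note))."""
    busy_until = []               # per channel: first free row
    placed = []
    for note in sorted(notes):    # process in start-row order
        start, end, _ = note
        for ch, free_at in enumerate(busy_until):
            if free_at <= start:  # this channel has gone quiet; reuse it
                busy_until[ch] = end
                placed.append((ch, note))
                break
        else:                     # every channel busy; open a new one
            busy_until.append(end)
            placed.append((len(busy_until) - 1, note))
    return len(busy_until), placed

# A C-major triad needs three channels; a melody note after it reuses one.
n, placed = allocate_channels([(0, 4, "C4"), (0, 4, "E4"), (0, 4, "G4"),
                               (4, 8, "D4")])
```

The same pass could also pick the pattern's tick resolution from the finest note timing it sees, which is the other half of "let the program decide".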



tl;dr Pure Data is an interesting game audio tool, but not great for actual composition. Pure Data + MML would be sweet. Integral game audio means game audio which MATTERS by affecting the game-play itself.

I apologize for this wall of text. And to the OP: learn Python or Lua or something. It's straightforward to get into, pretty fun, and it can't hurt. Just knowing some of the very fundamental ideas behind programming can go a long way, IMO.