TIGSource Forums » Community » Townhall » Cosyne Synthesis Engine
Author Topic: Cosyne Synthesis Engine  (Read 25198 times)
muku
« on: March 23, 2010, 05:13:01 PM »

Hey people,

I haven't been around these parts much lately for various reasons; maybe some people remember me anyway. However, I've been working on something that may interest those wanting to make a music or rhythm game: a lightweight realtime software synthesizer which can be integrated as a library. It's called Cosyne, for the Cosyne Synthesis Engine (yeah, recursive acronyms rock). Instruments for the synth can be programmed in a special synth description language (documentation).

In this very first release, there's a demo application which lets you play with various synthesizers using your computer keyboard, and there are bindings to C and Python as well as some very basic example programs making use of those.

Download for Windows (check the readme):
http://bitbucket.org/eriatarka/cosyne/downloads/cosyne-bin-0.1.zip obsolete

Project homepage (source etc):
http://bitbucket.org/eriatarka/cosyne/wiki/Home

Some audio samples (this is only scratching the surface...):
http://www.box.net/shared/cflgzj8d26

I don't know if there's any interest in this kind of thing at all, so I'm just putting it out there to see what people think.

BTW, it's written in D.


=== UPDATE ===

The latest version of Cosyne can be obtained by going to the Downloads page and grabbing the newest binary package.
« Last Edit: April 06, 2010, 03:02:02 PM by muku »
John Nesky
aka shaktool
« Reply #1 on: March 23, 2010, 05:19:23 PM »

Hmm! I am making something similar! But I haven't had time to work on mine lately :'( so congrats on releasing this. I'll make a note to check this out as soon as I have time.
tomka
Guest
« Reply #2 on: March 23, 2010, 06:42:31 PM »

hmm, I'll see if I can get an OS X binary up and running
Alex Vostrov
« Reply #3 on: March 23, 2010, 07:20:19 PM »

This looks interesting.  Maybe now my games will have more-than-rudimentary sound.
muku
« Reply #4 on: March 24, 2010, 12:53:56 AM »

Quote
hmm, I'll see if I can get an OS X binary up and running

That would be too cool... do you have a D environment set up? I used DMD 1 with Tango. Let me know if I can help. The main issue is that I don't know whether DMD can build shared libraries on OS X. To get started, try the CosPlay demo application, which doesn't do dynamic linking; it just compiles the library in.

Also I should mention the Bitbucket hg repository is a few commits behind my actual working directory because I had some trouble pushing to it last night. I hope this will be fixed soon.
increpare
Guest
« Reply #5 on: March 27, 2010, 04:18:40 PM »

Hey muku, it's a very happy thing to see you pop back in here, even if just for a little.

I'll try this out either next time I'm back on windows or whenever the osx build gets built.
increpare
Guest
« Reply #6 on: March 30, 2010, 09:39:24 AM »

This is neat!  Like a lightweight version of SuperCollider.

Would be cool to link to as a library, but I won't be able to do that unless there's a mac port done.

A direct 'output to wave' function would be cool - with that, I can imagine myself making plenty of use of this for messing around/improvising.
Trunks7j
« Reply #7 on: March 31, 2010, 12:32:56 PM »

This sounds fantastic, man!  I can't wait to check it out. I just recently finished working on a small music game project, and throughout the entire process I kept thinking to myself that there is a ton of room for improvement in real-time synthesis libraries for programming languages.
agj
« Reply #8 on: April 01, 2010, 03:55:51 PM »

Hey! Cool to see you again. I really like the concept of real-time sound synthesis, and games that use that. I hope cool things come from this.

muku
« Reply #9 on: April 02, 2010, 06:16:13 AM »

0rel, that's really neat, just the kind of thing I imagined people could do with it! Glad to see some people are interested in this. I just came back from a week in Berlin, so I had no time to work on it, but as soon as I have some free time I'll attend to some of the issues raised here.
JackieJay
« Reply #10 on: April 02, 2010, 02:22:04 PM »

The name is just mind-blowing.

Tanner
MMPHM *GULP*
« Reply #11 on: April 02, 2010, 07:07:34 PM »

I don't think the name is the part that's intended to be mindblowing...

muku
« Reply #12 on: April 03, 2010, 01:59:38 AM »

OK, I finally have some time. First of all, thanks for all the nice welcome-backs and such, it's appreciated :)

Quote
Some documentation would help though.
Yes, I should get to that ASAP.

Quote
* Realtime control over parameters of the sound (ChangeParameter(instrument, ...) or something similar.)
Yes, I've definitely thought about something like this, and it shouldn't be hard to implement. I guess I'll do this soon.

Quote
* More than 2 oscillators per instrument?
Would certainly be attractive. So far I've stuck to only 2 for several reasons. First of all, the plan was to keep synthesis cheap enough to run in the background of a game which does a fair amount of processing itself. Besides, the 2-oscillators-plus-filter layout is sort of a classic (soft)synth design, so it's immediately familiar to anyone who has done some synth programming. (In fact, if you know the synth1 VST, I've copied a lot of its layout.) Finally, having a variable number of oscillators would probably necessitate a more modular design, which could complicate things both internally and for the user. On the other hand, just adding an optional third oscillator which gets mixed in with the others would be more or less trivial, so that's definitely an option if people want it.

Quote
* Polyphonic instruments? (PlayChord(...) or something similar... Multiple PlayNote()s with the same start time? Didn't try that yet...)
Yep, multiple PlayNote()s do work. Just watch the volume so that you don't get clipping. I'm not sure yet how to deal with this whole volume business properly. Anyway, you can try it in the CosPlay sample application, just hit several keys at once.

Quote
And to get the library working in C++, I had to change those macros in the cosyne.h file like this:
Ah, thanks for this, forgot to try it in C++. Will update the header file accordingly.


Quote
Still I'm not so sure if I used the right method to play notes interactively, after pressing a button. I tried it like this:

Code:
Cosyne_PlayNote( c, Cosyne_GetCurrentTime( c ), 20000, INSTR_XYZ, 40, 0x40 );

That worked for me, although the timing is a bit off that way, because OpenAL (and other audio APIs too) only asks for buffer updates at certain intervals, so PlayNote() events are only as accurate as the buffer size allows... But it doesn't sound so bad, and adds some "analog qualities" to it ;)

(Probably there's a way around this though, by adding an offset manually, but I didn't try that yet... Instead of a buffer based render function, a sample based render function could also help with this maybe, although that would make the performance go down...)

The way you tried was fine. (As a shorthand, you can use -1 instead of GetCurrentTime(), or maybe 0xFFFFFFFF if the compiler complains about signed-unsigned mismatch.)

Cosyne internally works with chunks of 64 samples instead of processing each sample individually to keep the CPU load manageable. If each chunk could be output immediately without delay, at a sample rate of 44.1kHz, this would correspond to a latency of 64/44100 = 1.45ms, which is basically realtime as far as humans are concerned. So this design shouldn't be a problem.

The problem is how to get the data to your soundcard as quickly as possible. As you mentioned, all audio APIs have some kind of cyclic buffer for audio data. The key point is the size of this buffer, which the API should let you choose upon initialization. If it's too large, you will get bad latency; if it's too small, audio will crackle and skip. This is to some degree hardware-dependent, so I guess you should offer it as an option to your user, but in general I've found that 512 samples is an acceptable compromise.

Keep in mind that a game running at 60fps corresponds to 735 samples per frame (at 44.1kHz), so a 512-sample buffer should be just fine to keep perfect sync with the video.

Quote
By the way, I had an idea about MML (http://en.wikipedia.org/wiki/Music_Macro_Language), which I found out about a while ago... That's some crazy stuff for sure, and maybe the way it handles sequencing could be useful here too. I don't really understand how all this MML stuff works, but it seems to be very powerful. Maybe a separate library for sequencing only could be another "module" of this toolkit, just an idea I had... Something like macros (for example for defining things like an arpeggiator, scales and rhythms) could be defined in separate small scripts, which generate note/parameter events and pass them over to Cosyne. Some parameters could be controlled interactively... It's a quite cloudy idea though... Don't know how it could be implemented myself ;)
Yes, that's very interesting stuff. I also did some stuff with procedural composition in Python a while back which I could now hook up to Cosyne (maybe you can still find the thread here). In general, I think these sorts of things should be implemented on top of Cosyne, not within it: I've decided to keep the engine closely focused on efficient generation of audio data. This is why the score functionality is so rudimentary and doesn't even offer something like a beats per minute setting, just raw sample counts. (Though maybe seconds would be nicer.)

Quote
I'd just love to edit instruments and possibly even "interactive sequences" while the final program keeps running... However, that's probably quite difficult with text-based code files.
A very nice thought, and I think not even that hard to implement. Just check if the file has been modified (usually the OS will give you some kind of hook to listen for this; Windows does, at least), and if so, reload it.

Quote
I also tried my hand at making an audio synth library (called 'executable sound'), you might remember it... It's still not fully finished, but I'll be working on it again soon. Mine isn't so "score"-oriented, and it's harder to play notes and to define separate instruments as easily... And it can't be used in pure code/text, which makes things more difficult in practice.
Yes, I remember it, it was included with that 3D flying game of yours, right? (Occuplector or something like that, IIRC.) It was very cool, though it took a long while to render. Would definitely love to see any progress on that.

Quote
- This is one point that is really cool about your library. It's easy to make the most obvious thing: Play a note with an instrument! That's really useful.
Yes, ease of use (and binding to other languages) was a goal with this.
muku
« Reply #13 on: April 03, 2010, 02:19:54 AM »

Quote
This is neat!  Like a lightweight version of SuperCollider.

Would be cool to link to as a library, but I won't be able to do that unless there's a mac port done.

Yeah, sorry about that. There's nothing really platform-specific about it, so a port shouldn't even be hard to do; I just don't have access to a Mac :(

Quote
A direct 'output to wave' function would be cool - with that, I can imagine myself making plenty use of this for messing around/improvising.
What are you referring to? The library itself is totally backend-agnostic, it just generates samples, what you do with them is up to you. For convenience, a simple SDL backend is provided. The CosPlay app actually has a command line switch to record to a wav file. Or maybe you were referring to 0rel's thing.
muku
« Reply #14 on: April 03, 2010, 02:21:57 AM »

Quote
The name is just mind-blowing.

Heh. It must have been a rare flash of inspiration ;)
increpare
Guest
« Reply #15 on: April 03, 2010, 04:28:00 AM »

Quote
The CosPlay app actually has a command line switch to record to a wav file. Or maybe you were referring to 0rel's thing.
Oh - I didn't know that!  Cool.
muku
« Reply #16 on: April 05, 2010, 11:47:48 PM »

Thanks, some interesting points there. I'm at work now and will reply in more depth later. Just wanted to point out that I've done some documentation on the synth language. It's not complete yet, but should give people a start.

Tonight I'll probably do a new release with parameter changes implemented and some other smaller fixes.
muku
« Reply #17 on: April 06, 2010, 02:56:46 PM »

Ok, Cosyne v0.2 is out. We now have user parameters as suggested by 0rel, and CosPlay can modulate them with the mouse if you pass it the -m switch. There are some other minor changes, mostly better error reporting.

Also, the first draft of the synth language docs is now more or less complete.
muku
« Reply #18 on: April 08, 2010, 08:12:46 AM »

Quote
A small description of each interface function could also help a little maybe, to make it easier to get started, although it's all quite self-explanatory...

Yup, it's on my mental todo list ;)

Quote
The only thing I didn't get was how the velocity parameter of Cosyne_PlayNote() really affects the instrument... Is it clamped to [0,255] or only to 127?

Range 0..127, for compatibility with MIDI, where it's the same. Right now this just linearly scales the gain of the instrument, but I might think of something more physical here...

Quote
Also I thought maybe it would be good to add it to the instrument description language the same way LFOs and parameters can be used in expressions, by using a symbol like 'vel' or something, so that one could do things like 'shape({ 0.1+vel*0.8 })'? As a float parameter in that case... Just an idea though.

That's already possible! It's not yet in the docs though. Use the variable f for frequency and v for velocity. f is in Hz, v should already be scaled to 0..1.
muku
« Reply #19 on: April 10, 2010, 12:16:40 PM »

I sort of forgot/postponed replying to this point.

Quote
Another solution that came to mind was to handle the interaction at a different rate than the audio API asks for updates, which can be unpredictable at times (OpenAL uses somewhat mysterious buffer queues, for example). I often use a Move() function in my game code, separated from Render(), to handle all interactions/animations/state changes, which should be independent of the framerate. So Cosyne_Render() could be called there at steady intervals, to render a small number of samples to a temporary soundbuffer, which would be cleared at the next audio API callback... That way the interaction (PlayNote/ChangeParam events) would affect the audio at a much finer resolution, and one wouldn't directly depend on the audio buffer size. Maybe I'll try it that way... It isn't really an issue of your library though, more about how to use it.

It's already possible to schedule PlayNote/ChangeParam events at a quite fine resolution by just specifying a suitable time value when you call the function; no need to call Render() several times per sample.

The issue I see with your suggestion, though, is that as far as the game logic is concerned, everything that happens during one frame happens conceptually at the same time. Just because you detect, say, one collision before another in your Move() function doesn't mean that they necessarily happened in that order. So I don't see how it would be very useful to do this. Or are you talking about a separate thread which is not in lockstep with the main game logic? I haven't used OpenAL at all, I've only read up on how it does audio streaming, so I may be missing something.

I think the problems you saw with high latencies stemmed solely from the size of your buffers. How many buffers did you queue on your source, and how large were they? (The SDL way with a callback which is called when samples are needed definitely seems easier here...)


Quote
Hm, that volume issue was annoying me too by the way. I also don't have a perfect solution for this though...

I just implemented something which should help a bit with that. Maybe the following information is useful to you.

I realized a while back that linearly mapping velocity to the amplitude of the generated waveform isn't a very good choice. The ear doesn't work linearly: if you halve the amplitude of a signal, it doesn't sound half as loud, but typically quite a bit louder than half.

Decibels, a logarithmic scale, were invented for just this kind of thing. It turns out that a gain of -10dB (which corresponds to a gain factor of about 0.31622) sounds about half as loud as the original. So I did the math and came up with the formula v ^ 1.661 for the amplitude scaling, where v is the velocity scaled to the range 0..1. With this formula, every time you halve the velocity you get a relative gain of -10dB, and thus an approximate halving of the perceived loudness, which I think is a reasonable assumption to make.

Here's a simple example to compare the two methods: The notes in these audio samples increase/decrease in velocity by 16 from one to the next. The first sample is with the old linear velocity mapping, the second with the new nonlinear one. I think the second one sounds quite a bit more natural. The new method also gives you more headroom for mixing together multiple voices.

* linear velocity
* nonlinear velocity

So this will go out with the next update.