Where to start on iOS audio synth?

Question!

I know this is a very broad topic, but I've been floundering around with demos and my own tests and am not sure if I'm attacking the problem correctly. So any leads on where I should start would be appreciated.

The goal is to have the app generate some synthesized sounds, per the user's settings. (This isn't the only app function; I'm not recreating Korg here, but synth is part of it.) The user would set the typical synth settings like waveform, reverb, etc., then would pick when the note would play, probably with pitch and velocity modifiers.

I've played around a bit with Audio Units and RemoteIO, but only barely understand what I'm doing. Before I go TOO far down that rabbit hole, I'd like to know if I'm even in the right ballpark. I know audio synthesis is going to be low level, but I'm hoping that maybe there are some higher-level libraries out there that I can use.

If you have any pointers on where to start, and which iOS technology I should be reading about more, please let me know.

Thanks!

EDIT: Let me summarize the questions better.

Are there any synth libraries already built for iOS? (Commercial or open source - I haven't found any despite numerous searches, but maybe I'm missing something.)

Are there any higher-level APIs that can help generate buffers more easily?

Assuming that I can already generate buffers, is there a better / easier way to submit those buffers to the iOS audio device than the RemoteIO Audio Unit?



Answers

I have been using the audio output example from openFrameworks and the Stanford STK synthesis library to work on my iOS synth application.

By : Paul Wand


Basically it is going to be a toss-up between Audio Queues and Audio Units. If you need to get close to real time, for example if you need to process microphone input, Audio Units are the way to achieve minimum latency.

However, there is a limit to how much processing you can do inside the render callback: a chunk of data arrives on an ultra-high-priority system thread, and if you try to do too much in that thread, it will bog down the whole OS.

So you need to code carefully inside this callback. There are a few pitfalls, like using NSLog or accessing properties of another object that were declared without nonatomic (i.e., they will implicitly create locks).
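
To make that concrete, here is a minimal sketch of a render callback that just synthesizes a sine wave (the SynthState struct, the mono Float32 stream format, and the names are illustrative assumptions). Note that it does nothing but arithmetic on the output buffer: no logging, no locks, no allocation.

```c
// Minimal sketch of a RemoteIO render callback. It only does arithmetic on
// the buffer: no NSLog, no Objective-C messaging, no locks, no allocation.
#include <AudioUnit/AudioUnit.h>
#include <math.h>

typedef struct {
    double phase;       // current oscillator phase in radians
    double phaseStep;   // 2 * pi * frequency / sampleRate
    float  amplitude;   // 0.0 ... 1.0
} SynthState;

static OSStatus RenderSine(void *inRefCon,
                           AudioUnitRenderActionFlags *ioActionFlags,
                           const AudioTimeStamp *inTimeStamp,
                           UInt32 inBusNumber,
                           UInt32 inNumberFrames,
                           AudioBufferList *ioData)
{
    SynthState *state = (SynthState *)inRefCon;
    // Assumes a mono, non-interleaved Float32 stream format was set on the unit.
    Float32 *out = (Float32 *)ioData->mBuffers[0].mData;

    for (UInt32 frame = 0; frame < inNumberFrames; ++frame) {
        out[frame] = state->amplitude * (Float32)sin(state->phase);
        state->phase += state->phaseStep;
        if (state->phase > 2.0 * M_PI) state->phase -= 2.0 * M_PI;
    }
    return noErr;
}
```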

This is the main reason Apple built a higher-level framework, Audio Queues (AQ), to take this tricky low-level business off your hands. AQ lets you receive, process, and emit audio buffers on a thread where it doesn't matter if you introduce latency.
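
Roughly, the AQ model looks like this (a sketch only; real code would also create the queue with AudioQueueNewOutput, allocate and prime a few buffers, and call AudioQueueStart): the queue calls your output callback on an ordinary thread whenever a buffer needs refilling, and you hand the buffer back with AudioQueueEnqueueBuffer.

```c
// Sketch of the Audio Queue output model. The callback runs on a normal
// thread, not the real-time render thread, so heavier synthesis is tolerable.
#include <AudioToolbox/AudioToolbox.h>
#include <string.h>

static void OutputCallback(void *inUserData,
                           AudioQueueRef inAQ,
                           AudioQueueBufferRef inBuffer)
{
    // Fill the buffer with silence here; real code would run the synth instead.
    memset(inBuffer->mAudioData, 0, inBuffer->mAudioDataBytesCapacity);
    inBuffer->mAudioDataByteSize = inBuffer->mAudioDataBytesCapacity;

    // Hand the refilled buffer back to the queue for playback.
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}
```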

However, you can get away with a lot of processing in the render callback, especially if you use the Accelerate framework to speed up your mathematical manipulations.
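
For example (a sketch; the helper names, buffer sizes, and scratch buffer are up to you), vDSP and vForce let you process a whole block in one call instead of a per-sample loop:

```c
// Sketch: two common Accelerate idioms for block-based synthesis math.
#include <Accelerate/Accelerate.h>
#include <stddef.h>

// Scale a whole buffer by a gain in one vectorized call (samples *= gain).
void ApplyGain(float *samples, size_t count, float gain)
{
    vDSP_vsmul(samples, 1, &gain, samples, 1, (vDSP_Length)count);
}

// Generate a block of sine samples: build a phase ramp, then take the sine
// of every element with vForce. 'phases' is a caller-provided scratch buffer.
void RenderSineBlock(float *phases, float *out, int count,
                     float phase0, float phaseStep)
{
    vDSP_vramp(&phase0, &phaseStep, phases, 1, (vDSP_Length)count); // phases[i] = phase0 + i*phaseStep
    vvsinf(out, phases, &count);                                    // out[i] = sinf(phases[i])
}
```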

In fact, just go with Audio Units: start with the link jonbro gave you. Even though AQ is a higher-level framework, it is more of a headache to use, and the RemoteIO Audio Unit is the right tool for this job.
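
For reference, here is a rough sketch of the RemoteIO plumbing that attaches a render callback like the one above (error checking omitted; RenderSine and the synth state refer back to the earlier sketch, and the mono Float32 format is an assumption):

```c
// Sketch: create the RemoteIO unit, set a float format, attach the callback,
// and start output. Error handling is omitted for brevity.
#include <AudioUnit/AudioUnit.h>

extern OSStatus RenderSine(void *, AudioUnitRenderActionFlags *,
                           const AudioTimeStamp *, UInt32, UInt32,
                           AudioBufferList *);

AudioUnit CreateRemoteIOUnit(void *synthState, double sampleRate)
{
    // Find and instantiate the RemoteIO output unit.
    AudioComponentDescription desc = {
        .componentType         = kAudioUnitType_Output,
        .componentSubType      = kAudioUnitSubType_RemoteIO,
        .componentManufacturer = kAudioUnitManufacturer_Apple,
    };
    AudioComponent comp = AudioComponentFindNext(NULL, &desc);
    AudioUnit unit;
    AudioComponentInstanceNew(comp, &unit);

    // Mono 32-bit float, non-interleaved -- matches what the callback writes.
    AudioStreamBasicDescription fmt = {0};
    fmt.mSampleRate       = sampleRate;
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kAudioFormatFlagsNativeFloatPacked | kAudioFormatFlagIsNonInterleaved;
    fmt.mChannelsPerFrame = 1;
    fmt.mFramesPerPacket  = 1;
    fmt.mBitsPerChannel   = 32;
    fmt.mBytesPerFrame    = 4;
    fmt.mBytesPerPacket   = 4;
    AudioUnitSetProperty(unit, kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Input, 0, &fmt, sizeof(fmt));

    // Attach the render callback to the output element.
    AURenderCallbackStruct cb = { .inputProc = RenderSine, .inputProcRefCon = synthState };
    AudioUnitSetProperty(unit, kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input, 0, &cb, sizeof(cb));

    AudioUnitInitialize(unit);
    AudioOutputUnitStart(unit);
    return unit;
}
```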

By : P i


There are two parts to this. First, you need to generate buffers of synthesized audio; this is pretty much platform-agnostic, and you'll need a good understanding of audio synthesis to write this part. The second part is passing these buffers to an appropriate OS-specific API so that the sound actually gets played. Most APIs for audio playback support double buffering or even multiple buffers, so you can synthesize future buffers while the current buffer plays. Which iOS API to use will probably depend on your app's overall architecture, but this is really the easy part; the synthesis is where you'll need to do most of the work.
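
As a sketch of the platform-agnostic half (the Voice struct and the pitch/velocity mapping are just illustrative), the synthesis code only needs to know how to fill a float buffer; whichever iOS playback API you pick then calls it whenever it needs more samples:

```c
// Platform-agnostic synthesis: no iOS headers, just "fill this float buffer".
// Pitch and velocity map onto frequency and amplitude.
#include <math.h>
#include <stddef.h>

typedef struct {
    double phase;       // radians
    double sampleRate;
    double frequency;   // Hz, derived from the note's pitch
    float  amplitude;   // derived from the note's velocity
} Voice;

// MIDI-style pitch/velocity to oscillator parameters.
void VoiceNoteOn(Voice *v, int midiNote, int velocity, double sampleRate)
{
    v->phase      = 0.0;
    v->sampleRate = sampleRate;
    v->frequency  = 440.0 * pow(2.0, (midiNote - 69) / 12.0);
    v->amplitude  = (float)velocity / 127.0f;
}

// Fill a buffer of mono float samples; the playback layer (RemoteIO, AQ, ...)
// calls this from its buffer/render callback whenever it needs more audio.
void VoiceRender(Voice *v, float *out, size_t frames)
{
    double phaseStep = 2.0 * M_PI * v->frequency / v->sampleRate;
    for (size_t i = 0; i < frames; ++i) {
        out[i] = v->amplitude * (float)sin(v->phase);
        v->phase += phaseStep;
        if (v->phase > 2.0 * M_PI) v->phase -= 2.0 * M_PI;
    }
}
```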

By : Paul R

