iOS 7 introduced the AVSpeechSynthesizer speech synthesis API. With this API, you can have iOS speak a phrase in the language of the text provided. Used correctly, this can add a compelling level of user interaction and direction to your app.
The Utterance iOS Titanium module provides a simple-to-use API for interacting with AVSpeechSynthesizer in the simulator or on device.
In just a few lines of code you can speak a phrase in any supported language. Here is an example in Japanese.
First, we require the module in our Titanium project:
var utterance = require('bencoding.utterance');
Then we create a new Speech object:
var speech = utterance.createSpeech();
Finally, we call the startSpeaking method and provide the text we wish to have spoken:
speech.startSpeaking({
text:"こんにちは"
});
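Putting those pieces together, here is a minimal self-contained sketch that speaks the phrase when a button is tapped. The window and button scaffolding is ordinary Titanium UI code added for illustration; only require(), createSpeech() and startSpeaking() come from the module itself.
// Minimal sketch: wire the Utterance module into a basic Titanium UI.
var utterance = require('bencoding.utterance');
var speech = utterance.createSpeech();

var win = Ti.UI.createWindow({ backgroundColor: 'white' });
var button = Ti.UI.createButton({ title: 'Say hello' });

button.addEventListener('click', function () {
    // Speak the Japanese greeting "こんにちは" ("Hello")
    speech.startSpeaking({
        text: "こんにちは"
    });
});

win.add(button);
win.open();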
For a full list of the methods available, please visit the project on GitHub here.
The following is a video showing the module in action.
How to get the module: download it from the project's GitHub repository linked above.