Successful attempts to exploit recent advances in multimedia user interfaces, such as interactive computer graphics & speech processing... the project completed a detailed investigation of how such multimedia schemes can be exploited for pronunciation teaching for hearing-impaired people.
Bilingual-whole interface designed to provide stimulating and motivational material.
EXTRA: INDIANA SPEECH TRAINING AID (ISTRA). The Path of Speech Technologies in Computer Assisted Language Learning: from ...
By V. Melissa Holland, F. Peter Fisher
Notes on HMMs (hidden Markov models) and common errors when training on the corpus.
NOTES (from meeting): it would be a good idea to integrate it with the primary school curriculum. The primary school books can be downloaded online, so they can be checked against it. Try to pitch it as extension work from vocabulary learnt in the school classroom, i.e. an optional extension for doing extra work at home if they wanted.
Check the school curriculum for first and second class… APPLY IT! And test to see whether it worked / was helpful in schools.
There are two types of learning: expressive and comprehension/understanding. I'm aiming at expressive, so first and second class level. Before that they just learn the actual words, but I'm interested in their use of them.
Books that may be of use:
Céad Focal: the first hundred words
Buntús Foclóra: a children's Irish picture dictionary
Phonemes that are unique to Irish – get native speakers to pronounce them.
For an articulation game (not language), the apps in existence already measure pitch & loudness
A GOOD WAY OF LEARNING WOULD BE:
To have a range of acceptability for each word rather than a clear-cut right or wrong, and ideally a way of testing what the learner did incorrectly (see the sketch below).
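As a rough sketch of that idea in ActionScript 3.0: the pronunciation score and the thresholds below are purely illustrative assumptions, not part of any existing scoring engine.

// Sketch only: "score" is assumed to come from some pronunciation-comparison step,
// scaled so that 0 = no match and 1 = a perfect match.
function feedbackFor(score:Number):String {
    if (score >= 0.85) {
        return "Great - that sounded right!";
    } else if (score >= 0.6) {
        return "Close - listen again and watch the mouth shape.";
    } else {
        return "Not quite - try repeating the word after the character.";
    }
}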
"it's enough if I can record it, replay it and overwrite it again if the user records another sound clip.
So far I gather this is impossible in Flash without the use of a mediaserver. Then again if I'm right the only way a media server can help me is by actually streaming the microphone input to the mediaserver, actually storing the sound as an mp3 file or similar, and then serving the mp3 file back to the app. It seems a bit overkill for just replaying short soundclips while the app is up, soundclips which are going to be re-recorded over and over while the user is using the app and trashed when he stops using the app.
I'm afraid not. I spent some time looking around and there seems to be no way to do this purely with Flash alone. As far as I can tell the only way to record audio is by streaming the microphone input to a media server such as Adobe's own media server or an open-source alternative like Red5 and then playing back the mp3 stored by the server.
Quite an ugly solution, so we ended up building a simple Java applet to take care of the temporary recording. (http://www.jsresources.org/examples/audio_playing_recording.html)"
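A minimal sketch of the media-server approach described above, in ActionScript 3.0. The RTMP address, application name and stream name are hypothetical; it assumes a Flash Media Server or Red5 instance is available.

// Sketch only: stream the microphone to an RTMP media server and replay the stored clip.
var nc:NetConnection = new NetConnection();
nc.addEventListener(NetStatusEvent.NET_STATUS, onStatus);
nc.connect("rtmp://localhost/recorder"); // hypothetical server and application name

function onStatus(evt:NetStatusEvent):void {
    if (evt.info.code == "NetConnection.Connect.Success") {
        var mic:Microphone = Microphone.getMicrophone();
        var outStream:NetStream = new NetStream(nc);
        outStream.attachAudio(mic);
        outStream.publish("tempClip", "record"); // the server stores the clip (e.g. tempClip.flv)
    }
}

// Later, to replay the clip stored on the server:
function replay():void {
    var inStream:NetStream = new NetStream(nc);
    inStream.play("tempClip"); // audio from the stream plays through the default sound output
}

Re-publishing under the same name each time the user records again would match the overwrite-and-replay behaviour described in the quote.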
"Could use java applet (but would have privacy issues)
Purpose. Plays a single audio file. Capable of playing some compressed audio formats (A-law, μ-law, maybe ogg vorbis, mp3, GSM06.10). Allows control over buffering and which mixer to use. Usage:
"My team is currently developing a series of interactive speech recognition application. One of the applications requires us to create a web front end that allows us to record audio from a user microphone and return it to the server... We decided to use Flash and quickly found that we could not extract the audio from our recorded FLV files...But we simply could not get the audio out of the files that were being streamed to our Flash Media Server. "
"We discovered that all files converted from another format to FLV store audio in an embedded MP3. Unfortunately, all FLV files recorded from the user’s microphone in by the Flash Player use the Nellymoser audio format. Nellymoser is a highly proprietary mono audio format designed solely for streaming speech. When we looked for a program to decompress this converter we found that Nellymoser offered a converter for $7,500."
"one other converter that would do our decoding, the Total Video Converter. for only $50"
"You can convert the flv to many different audio or video formats... To convert an FLV that contains video or video and audio, but not audio only, you may use the GUI which is self explanatory.Audio only clips can not be converted with the GUI at this time. (The application simply locks up when we try to convert Nellymoseraudio only FLV files)."
Therefore, to record and extract audio from a user's mic it is necessary to record video as well, and this can be done using the following code:
var bandwidth:int = 0; // Maximum amount of bandwidth the outgoing video feed can use, in bytes per second. The default value is 16384.
var quality:int = 50;  // 0-100, with 1 being the lowest quality.
var camera:Camera = Camera.getCamera();
camera.setQuality(bandwidth, quality);
camera.setMode(320, 240, 15, true); // setMode(videoWidth, videoHeight, video fps, favor area)

// Now attach the webcam stream to a video object.
var video:Video = new Video();
video.attachCamera(camera);
addChild(video);
Depending on the project, you can change bandwidth, quality, and frame-rate settings to find the best combination.
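To actually capture that feed rather than just preview it, a sketch along these lines could publish both the camera and the microphone to the media server in record mode. It assumes the connected NetConnection (nc) from the earlier media-server sketch; the stream name is hypothetical.

// Sketch only: publish camera + microphone so the audio can be extracted on the server side later.
var recMic:Microphone = Microphone.getMicrophone();
var recStream:NetStream = new NetStream(nc); // nc: a connected NetConnection, as in the earlier sketch
recStream.attachCamera(camera);
recStream.attachAudio(recMic);
recStream.publish("userRecording", "record"); // the server writes userRecording.flv (with Nellymoser audio, as noted above)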
Listen to word and watch mouth movement...(Maybe a zoom in option for close-up of visemes)
Repeat Lesson
Volume level and pan controls
Next Number
Previous Button
Back to Main Menu
S&Ls Button on the last frame of lesson
Understand? button (help) which brings up the English translation under the Irish word (see the sketch after this list)
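A sketch of how that help button might toggle the translation in ActionScript 3.0; the instance names englishTxt and helpBtn are assumptions, not names from the project files.

englishTxt.visible = false; // English translation stays hidden until help is requested
helpBtn.addEventListener(MouseEvent.CLICK, onHelp);
function onHelp(evt:MouseEvent):void {
    englishTxt.visible = !englishTxt.visible; // show/hide the translation under the Irish word
}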
ELEMENTS ON SCREEN:
Figure of the number
Object to associate number with (Eg: petals on a flower)
Irish word, with the English translation underneath (set to invisible until help is requested); spell it in the sky!
Simple animation to illustrate meaning of word
"Each topic will have a lesson explaining the words in that vocabulary set, with their accompanying images. These combined with the interactive animated character create an optimal learning enviroment, also allowing for the user to check and recheck how to shape their mouths and how a word should sound." Project Proposal July, 09.
ADDITIONAL FEATURES:
Accompanying song for a revision-style animation at the end of the lesson, prior to the option of the S&Ls game (similar to those on youtube.com).
Code for sound in Flash:
var snd:Sound = new Sound();
snd.load(new URLRequest("aHaon.mp3"));
var channel:SoundChannel = new SoundChannel();

snd.addEventListener(IOErrorEvent.IO_ERROR, onIOError, false, 0, true);
function onIOError(evt:IOErrorEvent):void {
    trace("An error occurred when loading the sound;", evt.text);
}

// An event listener to ensure the sound only plays once it has fully loaded.
snd.addEventListener(Event.COMPLETE, onLoadComplete, false, 0, true);
function onLoadComplete(evt:Event):void {
    var localSnd:Sound = evt.target as Sound;
    channel = localSnd.play();
}
Synced the character counting to 10 and added accompanying simple animations to depict the meaning.
Phonemes and Visemes: No discussion of facial animation is possible without discussing phonemes. Jake Rodgers’s article “Animating Facial Expressions” (Game Developer, November 1998) defined a phoneme as an abstract unit of the phonetic system of a language that corresponds to a set of similar speech sounds. More simply, phonemes are the individual sounds that make up speech. A naive facial animation system may attempt to create a separate facial position for each phoneme. However, in English (at least where I speak it) there are about 35 phonemes. Other regional dialects may add more.
Now, that’s a lot of facial positions to create and keep organized. Luckily, the Disney animators realized a long time ago that using all phonemes was overkill. When creating animation, an artist is not concerned with individual sounds, just how the mouth looks while making them. Fewer facial positions are necessary to visually represent speech since several sounds can be made with the same mouth position. These visual references to groups of phonemes are called visemes. How do I know which phonemes to combine into one viseme? Disney animators relied on a chart of 12 archetypal mouth positions to represent speech as you can see in Figure 1.
Figure 1. The 12 classic Disney mouth positions.
Each mouth position or viseme represented one or more phonemes. I have seen many facial animation guidelines with different numbers of visemes and different organizations of phonemes. They all seem to be similar to the Disney 12, but also seem like they involved animators talking to a mirror and doing some guessing.
Along with the animator’s eye for mouth positions, there are the more scientific models that reduce sounds into visual components. For the deaf community, which does not hear phonemes, spoken language recognition relies entirely on lip reading. Lip-reading samples base speech recognition on 18 speech postures. Some of these mouth postures show very subtle differences that a hearing individual may not see.
SAPI provides the programmer with a very powerful feature - viseme notification. A viseme refers to the mouth position currently being "used" by the speaker. SAPI 5 uses a set of 22 visemes (SP_VISEME_0 to SP_VISEME_21), derived from the classic Disney mouth positions:
typedef enum SPVISEMES
{
    // English examples
    //------------------
    SP_VISEME_0 = 0,   // silence
    SP_VISEME_1,       // ae, ax, ah
    SP_VISEME_2,       // aa
    SP_VISEME_3,       // ao
    SP_VISEME_4,       // ey, eh, uh
    SP_VISEME_5,       // er
    SP_VISEME_6,       // y, iy, ih, ix
    SP_VISEME_7,       // w, uw
    SP_VISEME_8,       // ow
    SP_VISEME_9,       // aw
    SP_VISEME_10,      // oy
    SP_VISEME_11,      // ay
    SP_VISEME_12,      // h
    SP_VISEME_13,      // r
    SP_VISEME_14,      // l
    SP_VISEME_15,      // s, z
    SP_VISEME_16,      // sh, ch, jh, zh
    SP_VISEME_17,      // th, dh
    SP_VISEME_18,      // f, v
    SP_VISEME_19,      // d, t, n
    SP_VISEME_20,      // k, g, ng
    SP_VISEME_21,      // p, b, m
} SPVISEMES;
Every time a viseme is used, the SAPI 5 engine can send your application a notification, which it can use to draw the mouth position. Microsoft provides an excellent example of this with its SAPI 5 SDK, called TTSApp. TTSApp is written using the standard Win32 SDK and has additional features that bog down the code. Therefore, I created my own version using MFC that is hopefully a little easier to understand.
Using visemes is relatively simple; it is the graphical side that is the hard part. This is why, for demonstration purposes, I used the microphone character that was used in TTSApp.
3. [r*] & [book] - Rounded open lips with corner of lips slightly puckered. If you look at Chart 1, [r] is made in the same place in the mouth as the sounds of #7 below. One of the attributes not denoted in the chart is lip rounding. If [r] is at the beginning of a word, then it fits here. Try saying “right” vs. “car.”
4. [v] & [f ] - Lower lip drawn up to upper teeth.
5. [thy] & [thigh] - Tongue between teeth, no gaps on sides.
6. [l] - Tip of tongue behind open teeth, gaps on sides.
7. [d,t,z,s,r*,n] - Relaxed mouth with mostly closed teeth with pinkness of tongue behind teeth (tip of tongue on ridge behind upper teeth).
8. [vision, shy, jive, chime] Slightly open mouth with mostly closed teeth and corners of lips slightly tightened.
9. [y, g, k, hang, uh-oh] - Slightly open mouth with mostly closed teeth.
10. [beat, bit] - Wide, slightly open mouth.
11. [bait, bet, but] - Neutral mouth with slightly parted teeth and slightly dropped jaw.
12. [boat] - Very round lips, slightly dropped jaw.
13. [bat, bought] - Open mouth with a fully dropped jaw.
Emphasis and exaggeration are also very important in animation. You may wish to punch up a sound by using a viseme to punctuate the animation. This emphasis, along with the addition of secondary animation to express emotion, is key to a believable sequence. In addition to these viseme frames, you will want a neutral frame that you can use for pauses. In fast speech you may not want to add the neutral frame between all words, but in general it gives good visual cues to sentence boundaries.
The list of phonemes above corresponds with the Disney visemes.
I have designed a Disney-style equivalent set of visemes:
The corresponding phonemes (reading from left to right across the figure):
1) AH, H, I, A, ending TA
2) BA, B, MMM, P
3) C, G, J, K, S, Z, TS
4) E
5) O
6) OOO
7) Q, U
8) R
9) silence
10) start: D, T, THA, LA
11) ending: L, N; start: WHA, Y
12) vee, f, fah
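A sketch of how this mapping could be used in Flash, assuming the mouth is a MovieClip (here called mouthClip) with one frame per viseme, numbered 1-12 to match the list above; the lookup table and names are illustrative assumptions.

// Hypothetical lookup from a phoneme label to a viseme frame number (frames 1-12 of mouthClip).
var visemeFrame:Object = {
    AH: 1, H: 1, I: 1, A: 1,
    BA: 2, B: 2, M: 2, P: 2,
    C: 3, G: 3, J: 3, K: 3, S: 3, Z: 3, TS: 3,
    E: 4, O: 5, OOO: 6, Q: 7, U: 7, R: 8,
    SIL: 9,
    D: 10, T: 10, THA: 10, LA: 10,
    L: 11, N: 11, WHA: 11, Y: 11,
    V: 12, F: 12
};

function showViseme(phoneme:String):void {
    var frame:int = visemeFrame.hasOwnProperty(phoneme) ? int(visemeFrame[phoneme]) : 9; // default to the silence frame
    mouthClip.gotoAndStop(frame);
}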
I had to ensure that all elements of the face are identical in each viseme. This will aid animation later.
I designed the nose first and then the lips around that.
I decided to omit the eyes from this part of the animation as they don't particularly add to the mouth shape guide.
Hopefully the character will be totally interactive (ie: user can choose appearance), and so the design of the eyes could be chosen from a range of pre-designed eyes.
With these visemes prepared for animation there's only one thing left to do... get animating! Sync up to the sound, and deal with the sound appropriately (i.e. error catching, ensuring it only plays once fully loaded, use of channels etc.).
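One possible way to sync the mouth frames to a playing sound: keep a hand-made list of cue times (in milliseconds) with the viseme frame to show at each time, and check the SoundChannel position on every frame. The cue values, mouthClip and the loaded snd are assumptions carried over from the sketches above, not the project's final implementation.

// Sketch only: at each cue time, switch the mouth to the given viseme frame.
var cues:Array = [ {time: 0, frame: 9}, {time: 120, frame: 1}, {time: 300, frame: 11} ]; // illustrative timings
var cueIndex:int = 0;

var ch:SoundChannel = snd.play(); // snd: a fully loaded Sound, as in the sound code above
addEventListener(Event.ENTER_FRAME, onAnimFrame);

function onAnimFrame(evt:Event):void {
    while (cueIndex < cues.length && ch.position >= cues[cueIndex].time) {
        mouthClip.gotoAndStop(cues[cueIndex].frame); // mouthClip as in the viseme sketch above
        cueIndex++;
    }
    if (cueIndex >= cues.length) {
        removeEventListener(Event.ENTER_FRAME, onAnimFrame);
    }
}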
Extra Reading:
Bernard Tiddeman and David Perrett, "Prototyping and Transforming Visemes for Animated Speech", School of Computer Science and School of Psychology, University of St Andrews, Fife KY16 9JU.
Numbers 1-20, each uttered 5 times, once with each syllable pronounced separately.
List of colours (white, black, blue, yellow, red, orange, green, purple, brown, grey, pink), each uttered 5 times, once with each syllable pronounced separately.
Animals (cow, bird, dog, cat, sheep, bee, frog, cuckoo, chicken, lion), each uttered 5 times, once with each syllable pronounced separately. Also, 3 recordings of each of the animal sounds, with variations of each, if necessary.
Body Parts (arms, hands, fingers, shoulders, legs, knees, feet, toes, chest, stomach, neck, head, back), each uttered 5 times, once with each syllable pronounced separately.
List of Words:
Numbers:
Aon
Dó
Trí
Ceathair
Cúig
Sé
Seacht
Ocht
Naoi
Deich
Aon Déag
Dó Dhéag
Trí Dhéag
Ceathair Dhéag
Cúig Déag
Sé Déag
Seacht Déag
Ocht Déag
Naoi Déag
Fiche
Colours:
bán
dubh
gorm
buí
dearg
oráiste
glas
corcra
donn
liath
bándearg
Animals & Associated Sounds:
bó – moo
éan – tweet
madra – woof
cat – meow
caora – baa
beach – buzz
froga – croak
cuach – cuckoo
sicín – cluck
leon – roar
Some introductions and various phrases in both English and Irish also.
Recording took place in an acoustically isolated Live Room of professional standard, which has a clear-glass window allowing visual communication between the Live Room and the Control Room where the producer works. With a Digidesign HD setup and using Pro Tools HD 7.3, it was used to record 24-bit audio at 44.1 kHz in .wav format.
Mic used: Neumann U87 Ai
It is equipped with a large dual-diaphragm capsule with three directional patterns: omnidirectional, cardioid and figure-8. These are selectable with a switch below the headgrille. I chose the cardioid pattern as I was in a static position throughout the entire recording. A 10 dB attenuation switch is located on the rear; it enables the microphone to handle sound pressure levels up to 127 dB without distortion. The U 87 Ai can be used as a main microphone for orchestra recordings, as a spot mic for single instruments, and extensively as a vocal microphone for all types of music and speech. As can be seen from the accompanying images, a pop shield was used in conjunction with the U87 Ai. Pop shields are typically used in recording studios and serve as a noise-protection filter for microphones, preventing plosive interference and protecting the mic from saliva.
Post-Production
·The files were recorded in the lossless .wav format using Pro Tools HD 7.3. It was necessary in post-production to convert all the recordings to .mp3, as Flash does not always accept .wav files: it will only accept WAV PCM files, which are large and use up unnecessary hard-drive (or in this case, server) space. Using .mp3 files here saves on both space and computation.
"Although there are various sound file formats used to encode digital audio, ActionScript 3.0 and Flash Player support sound files that are stored in the mp3 format. They cannot directly load or play sound files in other formats like WAV or AIFF."
·It was then necessary to edit the audio files and organize them into folders. This was done using Audacity.