Listen to word and watch mouth movement... (maybe a zoom-in option for close-ups of visemes)
Repeat Lesson
Volume Level Pan
Next Number
Previous Button
Back to Main Menu
S&Ls Button on the last frame of lesson
Understand? Button (help) which brings up English translation under the Irish word
ELEMENTS ON SCREEN:
Figure of the number
Object to associate number with (Eg: petals on a flower)
Irish Word & English translation under (set to invisible until help is requested) spell in sky!
Simple animation to illustrate meaning of word
"Each topic will have a lesson explaining the words in that vocabulary set, with their accompanying images. These, combined with the interactive animated character, create an optimal learning environment, also allowing the user to check and recheck how to shape their mouths and how a word should sound." Project Proposal, July '09.
ADDITIONAL FEATURES:
Accompanying song for a revision-style animation at the end of the lesson, prior to the option of the S&Ls game (similar to those on youtube.com).
Code for sound in Flash (plays the sound only once it has fully loaded, with error catching):

import flash.events.Event;
import flash.events.IOErrorEvent;
import flash.media.Sound;
import flash.media.SoundChannel;
import flash.net.URLRequest;

var snd:Sound = new Sound();
var channel:SoundChannel;

// An event listener to ensure the sound only plays once fully loaded.
snd.addEventListener(Event.COMPLETE, onLoadComplete, false, 0, true);
function onLoadComplete(evt:Event):void {
    var localSnd:Sound = evt.target as Sound;
    channel = localSnd.play();
}

// Report any error that occurs while loading the sound.
snd.addEventListener(IOErrorEvent.IO_ERROR, onIOError, false, 0, true);
function onIOError(evt:IOErrorEvent):void {
    trace("An error occurred when loading the sound: " + evt.text);
}

snd.load(new URLRequest("aHaon.mp3"));
Synched the character counting to 10 and added accompanying simple animations to depict meaning.
Phonemes and Visemes: No discussion of facial animation is possible without discussing phonemes. Jake Rodgers’s article “Animating Facial Expressions” (Game Developer, November 1998) defined a phoneme as an abstract unit of the phonetic system of a language that corresponds to a set of similar speech sounds. More simply, phonemes are the individual sounds that make up speech. A naive facial animation system may attempt to create a separate facial position for each phoneme. However, in English (at least where I speak it) there are about 35 phonemes. Other regional dialects may add more.
Now, that’s a lot of facial positions to create and keep organized. Luckily, the Disney animators realized a long time ago that using all phonemes was overkill. When creating animation, an artist is not concerned with individual sounds, just how the mouth looks while making them. Fewer facial positions are necessary to visually represent speech since several sounds can be made with the same mouth position. These visual references to groups of phonemes are called visemes. How do I know which phonemes to combine into one viseme? Disney animators relied on a chart of 12 archetypal mouth positions to represent speech as you can see in Figure 1.
Figure 1. The 12 classic Disney mouth positions.
Each mouth position or viseme represented one or more phonemes. I have seen many facial animation guidelines with different numbers of visemes and different organizations of phonemes. They all seem to be similar to the Disney 12, but also seem like they involved animators talking to a mirror and doing some guessing.
Along with the animator’s eye for mouth positions, there are the more scientific models that reduce sounds into visual components. For the deaf community, which does not hear phonemes, spoken language recognition relies entirely on lip reading. Lip-reading samples base speech recognition on 18 speech postures. Some of these mouth postures show very subtle differences that a hearing individual may not see.
SAPI provides the programmer with a very powerful feature - viseme notification. A viseme refers to the mouth position currently being "used" by the speaker. SAPI 5 defines 22 visemes, based on the classic Disney mouth positions:
typedef enum SPVISEMES
{
    // English examples
    //------------------
    SP_VISEME_0 = 0,    // silence
    SP_VISEME_1,        // ae, ax, ah
    SP_VISEME_2,        // aa
    SP_VISEME_3,        // ao
    SP_VISEME_4,        // ey, eh, uh
    SP_VISEME_5,        // er
    SP_VISEME_6,        // y, iy, ih, ix
    SP_VISEME_7,        // w, uw
    SP_VISEME_8,        // ow
    SP_VISEME_9,        // aw
    SP_VISEME_10,       // oy
    SP_VISEME_11,       // ay
    SP_VISEME_12,       // h
    SP_VISEME_13,       // r
    SP_VISEME_14,       // l
    SP_VISEME_15,       // s, z
    SP_VISEME_16,       // sh, ch, jh, zh
    SP_VISEME_17,       // th, dh
    SP_VISEME_18,       // f, v
    SP_VISEME_19,       // d, t, n
    SP_VISEME_20,       // k, g, ng
    SP_VISEME_21,       // p, b, m
} SPVISEMES;
Every time a viseme is used, the SAPI 5 engine can send your application a notification, which it can use to draw the mouth position. Microsoft provides an excellent example of this with its SAPI 5 SDK, called TTSApp. TTSApp is written using the standard Win32 SDK and has additional features that bog down the code, so I created my own version using MFC that is hopefully a little easier to understand.
Using visemes is relatively simple; it is the graphical side that is the hard part. This is why, for demonstration purposes, I used the microphone character from TTSApp.
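The notification idea can be sketched in a language-neutral way. This is not SAPI code: the frame names below are invented for illustration, and the only assumption taken from the source is the 22-entry viseme enum above.

```javascript
// Sketch: map a SAPI 5 viseme ID (0-21, per the enum above) to a
// mouth-frame name. The frame names are invented for illustration;
// several IDs can share one frame, since many sounds look alike.
const visemeFrames = [
  "rest",  // 0  silence
  "aah",   // 1  ae, ax, ah
  "aah",   // 2  aa
  "aww",   // 3  ao
  "eh",    // 4  ey, eh, uh
  "er",    // 5  er
  "ee",    // 6  y, iy, ih, ix
  "ooh",   // 7  w, uw
  "oh",    // 8  ow
  "aww",   // 9  aw
  "oh",    // 10 oy
  "aah",   // 11 ay
  "rest",  // 12 h
  "r",     // 13 r
  "l",     // 14 l
  "sz",    // 15 s, z
  "sh",    // 16 sh, ch, jh, zh
  "th",    // 17 th, dh
  "fv",    // 18 f, v
  "dtn",   // 19 d, t, n
  "kg",    // 20 k, g, ng
  "mbp"    // 21 p, b, m
];

// On each viseme notification, return the frame to display;
// fall back to the neutral frame for unknown IDs.
function frameForViseme(visemeId) {
  return visemeFrames[visemeId] || "rest";
}
```

The fallback to the neutral "rest" frame doubles as the pause frame mentioned below: silence and unknown IDs both resolve to a closed, relaxed mouth.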
3. [r*] & [book] - Rounded open lips with corner of lips slightly puckered. If you look at Chart 1, [r] is made in the same place in the mouth as the sounds of #7 below. One of the attributes not denoted in the chart is lip rounding. If [r] is at the beginning of a word, then it fits here. Try saying “right” vs. “car.”
4. [v] & [f ] - Lower lip drawn up to upper teeth.
5. [thy] & [thigh] - Tongue between teeth, no gaps on sides.
6. [l] - Tip of tongue behind open teeth, gaps on sides.
7. [d,t,z,s,r*,n] - Relaxed mouth with mostly closed teeth with pinkness of tongue behind teeth (tip of tongue on ridge behind upper teeth).
8. [vision, shy, jive, chime] - Slightly open mouth with mostly closed teeth and corners of lips slightly tightened.
9. [y, g, k, hang, uh-oh] - Slightly open mouth with mostly closed teeth.
10. [beat, bit] - Wide, slightly open mouth.
11. [bait, bet, but] - Neutral mouth with slightly parted teeth and slightly dropped jaw.
12. [boat] - Very round lips, slightly dropped jaw.
13. [bat, bought] - open mouth with very dropped jaw."
Emphasis and exaggeration are also very important in animation. You may wish to punch up a sound by the use of a viseme to punctuate the animation. This emphasis, along with the addition of secondary animation to express emotion, is key to a believable sequence. In addition to these viseme frames, you will want to have a neutral frame that you can use for pauses. In fast speech, you may not want to add the neutral frame between all words, but in general it gives good visual cues to sentence boundaries.
The list of phonemes above corresponds with the Disney visemes.
I have designed a Disney-equivalent set of visemes, corresponding with the following phoneme groups (from left to right across):
1) AH, H, I, A, ending TA
2) BA, B, MMM, P
3) C, G, J, K, S, Z, TS
4) E
5) O
6) OOO
7) Q, U
8) R
9) silence
10) start D, T, THA, LA
11) ending L, N; start WHA, Y
12) vee, f, fah
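The groups above can be captured as a simple lookup table. A sketch (the sound labels follow my list rather than any standard phoneme set, and the reverse-lookup helper is my own illustration):

```javascript
// Sketch of the 12 viseme groups listed above, keyed by viseme number.
// The sound labels follow the list above, not a formal phoneme set.
const myVisemes = {
  1: ["AH", "H", "I", "A", "ending TA"],
  2: ["BA", "B", "MMM", "P"],
  3: ["C", "G", "J", "K", "S", "Z", "TS"],
  4: ["E"],
  5: ["O"],
  6: ["OOO"],
  7: ["Q", "U"],
  8: ["R"],
  9: ["silence"],
  10: ["start D", "T", "THA", "LA"],
  11: ["ending L", "N", "start WHA", "Y"],
  12: ["vee", "f", "fah"]
};

// Reverse lookup: which viseme frame shows a given sound?
// Unknown sounds default to the silence frame (group 9).
function visemeForSound(sound) {
  for (const [num, sounds] of Object.entries(myVisemes)) {
    if (sounds.includes(sound)) return Number(num);
  }
  return 9;
}
```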
I had to ensure that all elements of the face are identical in each viseme. This will aid animation later.
I designed the nose first and then the lips around that.
I decided to omit the eyes from this part of the animation as they don't particularly add to the mouth shape guide.
Hopefully the character will be totally interactive (ie: user can choose appearance), and so the design of the eyes could be chosen from a range of pre-designed eyes.
With these visemes prepared for animation there's only one thing left to do... get animating! Synch up to the sound, and deal with the sound appropriately (ie: error catching, ensuring it only plays once fully loaded, use of channels, etc).
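One simple way to synch the mouth to the audio is a cue list: timestamps into the clip paired with viseme numbers, looked up against the playback position every frame (in Flash, the position would come from SoundChannel.position). A sketch, with cue times and viseme numbers invented for illustration:

```javascript
// Sketch: pick the active viseme for the current playback position.
// Cues are { t: seconds, viseme: number }, sorted by time; the times
// and viseme numbers below are made up for illustration.
const cues = [
  { t: 0.00, viseme: 9 },  // silence before the word
  { t: 0.10, viseme: 1 },  // opening vowel
  { t: 0.25, viseme: 11 }, // closing consonant
  { t: 0.40, viseme: 9 }   // back to rest
];

// Return the viseme of the last cue at or before time t.
function visemeAt(t) {
  let current = cues[0].viseme;
  for (const cue of cues) {
    if (cue.t <= t) current = cue.viseme;
    else break;
  }
  return current;
}
```

Each frame, the animation would call visemeAt(playbackSeconds) and display the matching mouth shape, so the lips stay locked to the audio even if frames are dropped.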
Extra Reading:
Prototyping and Transforming Visemes for Animated Speech
Bernard Tiddeman and David Perrett
School of Computer Science and School of Psychology,
University of St Andrews, Fife KY16 9JU.
Numbers 1-20, each uttered 5 times, once with each syllable pronounced separately.
List of colours (white, black, blue, yellow, red, orange, green, purple, brown, grey, pink), each uttered 5 times, once with each syllable pronounced separately.
Animals (cow, bird, dog, cat, sheep, bee, frog, cuckoo, chicken, lion), each uttered 5 times, once with each syllable pronounced separately. Also, 3 recordings of each of the animal sounds, with variations of each, if necessary.
Body Parts (arms, hands, fingers, shoulders, legs, knees, feet, toes, chest, stomach, neck, head, back), each uttered 5 times, once with each syllable pronounced separately.
List of Words:
Numbers:
Aon
Dó
Trí
Ceathair
Cúig
Sé
Seacht
Ocht
Naoi
Deich
Aon Déag
Dó Dhéag
Trí Déag
Ceathair Déag
Cúig Déag
Sé Déag
Seacht Déag
Ocht Déag
Naoi Déag
Fiche
Colours:
bán
dubh
gorm
buí
dearg
oráiste
glas
corcra
donn
liath
bán-dearg
Animals & Associated Sounds:
bó - moo
éan - tweet
madra - woof
cat - meow
caoire - baa
beach - buzz
froga - croak
cuach - cuckoo
sicín - cluck
leon - roar
Some introductions and various phrases in both English and Irish also.
Recording took place in an acoustically isolated Live Room of professional standard, which has a clear-glass window allowing visual communication with the Control Room, where the producer works. With a Digidesign HD setup running ProTools HD 7.3, it was used to record 24-bit audio at 44.1kHz in .wav format.
Mic Used: Neumann U87 Ai
It is equipped with a large dual-diaphragm capsule with three directional patterns: omnidirectional, cardioid and figure-8, selectable with a switch below the headgrille. I chose cardioid, as I was in a static position throughout the entire recording. A 10 dB attenuation switch located on the rear enables the microphone to handle sound pressure levels up to 127 dB without distortion. The U 87 Ai can be used as a main microphone for orchestra recordings, as a spot mic for single instruments, and extensively as a vocal microphone for all types of music and speech. As can be seen from the accompanying images, a pop shield was used in conjunction with the U87 Ai. These are typically used in recording studios and serve as a noise-protection filter for microphones, preventing interference and protecting the mic from saliva.
Post-Production
·The files were recorded in the lossless .wav format using ProTools HD 7.3. It was necessary in post-production to convert all the recordings to .mp3, as Flash does not always accept .wav files. It will only accept WAV PCM files, which are large and use up unnecessary hard-drive (or in this case, server) space. Using .mp3 files here saves on both space and computation.
"Although there are various sound file formats used to encode digital audio, ActionScript 3.0 and Flash Player support sound files that are stored in the mp3 format. They cannot directly load or play sound files in other formats like WAV or AIFF."
·It was then necessary to edit and then organize the audio files into folders. This was done using Audacity.
A Handbook for language engineers http://209.85.229.132/search?q=cache:tMMcD2RCkRcJ:www.cogsci.ed.ac.uk/~osborne/csli.pdf+Frederick+Jelinek%27s+book,+Statistical+Methods+for+Speech+Recognition+download&cd=24&hl=en&ct=clnk&gl=ie&client=firefox-a
2000: Speech Interfaces for Games by François Dominic Laramée http://www.gignews.com/fdlspeech1.htm
3 weeks in total were spent attempting to install, run and use both the Sphinx & HTK open-source voice recognition packages. Eventually I got Sphinx installed and working, but it seems very inaccurate: all the demos I used returned wrong answers, eg: 'Hello John' returning a value such as 'ah ah two'.