Speech synthesis

Speech synthesis is the artificial production of human speech. A system used for this purpose is termed a speech synthesizer, and can be implemented in software or hardware. Speech synthesis systems are often called text-to-speech (TTS) systems in reference to their ability to convert text into speech. However, there exist systems that can only render symbolic linguistic representations like phonetic transcriptions into speech.

Overview of speech synthesis technology

A text-to-speech system (or engine) is composed of two parts: a front end and a back end. Broadly, the front end takes input in the form of text and outputs a symbolic linguistic representation. The back end takes the symbolic linguistic representation as input and outputs the synthesized speech waveform. The naturalness of a speech synthesizer usually refers to how much the output sounds like the speech of a real person. The intelligibility of a speech synthesizer refers to how easily the output can be understood.

The front end has two major tasks. First it takes the raw text and converts things like numbers and abbreviations into their written-out word equivalents. This process is often called text normalization, pre-processing, or tokenization. Then it assigns phonetic transcriptions to each word, and divides and marks the text into various prosodic units, like phrases, clauses, and sentences. The process of assigning phonetic transcriptions to words is called text-to-phoneme (TTP) or grapheme-to-phoneme (GTP) conversion. The combination of phonetic transcriptions and prosody information make up the symbolic linguistic representation output of the front end.

The other part, the back end, takes the symbolic linguistic representation and converts it into actual sound output. The back end is often referred to as the synthesizer. The different techniques synthesizers use are described below.
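
The split can be illustrated with a short sketch. The Python fragment below is only a structural illustration with trivial stand-in bodies; every function body and the 80-milliseconds-per-phone figure are hypothetical placeholders, not the behavior of any particular system.

    def front_end(text):
        """Text in, symbolic linguistic representation (phonemes plus prosody) out."""
        words = [w.strip(".,!?") for w in text.lower().split()]   # stand-in for text normalization
        phonemes = [list(w) for w in words]                       # stand-in for grapheme-to-phoneme conversion
        prosody = {"phrase_break_after": len(words) - 1}          # stand-in for prosodic analysis
        return phonemes, prosody

    def back_end(phonemes, prosody, sample_rate=16000):
        """Symbolic representation in, waveform out (here just a silent placeholder)."""
        n_phones = sum(len(p) for p in phonemes)
        n_samples = int(sample_rate * 0.08 * n_phones)            # assume roughly 80 ms per phone
        return [0.0] * n_samples

    waveform = back_end(*front_end("Speech synthesis is the artificial production of human speech."))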

History

Long before modern electronic signal processing was invented, speech researchers tried to build machines to create human speech. Early examples of 'speaking heads' were made by Gerbert of Aurillac (d. 1003), Albertus Magnus (1198-1280), and Roger Bacon (1214-1294).

In 1779, Christian Kratzenstein of St. Petersburg built models of the human vocal tract that could produce the five long vowel sounds (a, e, i, o and u). This was followed by the bellows-operated 'Acoustic-Mechanical Speech Machine' of Wolfgang von Kempelen of Vienna, Austria, described in his 1791 book Mechanismus der menschlichen Sprache nebst der Beschreibung seiner sprechenden Maschine ("mechanism of human speech with description of his speaking machine", J.B. Degen, Wien). This machine added models of the tongue and lips, enabling it to produce consonants as well as vowels. In 1837 Charles Wheatstone produced a 'speaking machine' based on von Kempelen's design, and in 1857 M. Faber built the 'Euphonia'. Wheatstone's design was resurrected in 1923 by Paget.

Image (Voder.jpg): Bell Labs's VODER was exhibited at the 1939 New York World's Fair and produced clearly intelligible speech.

In the 1930s, Bell Labs developed the vocoder, an electronic speech analyzer and synthesizer that was said to be clearly intelligible. Homer Dudley refined this device into the keyboard-operated VODER, which he exhibited at the 1939 New York World's Fair.

Early electronic speech synthesizers sounded very robotic and were often barely intelligible. Output from contemporary TTS systems is sometimes indistinguishable from actual human speech.

Despite the success of electronic speech synthesis, research is still being conducted into mechanical speech synthesizers for use in humanoid robots. Even a perfect electronic synthesizer is limited by the quality of the transducer (usually a loudspeaker) that produces the sound, so in a robot a mechanical system may be able to produce a more natural sound than a small loudspeaker.

The first computer-based speech synthesis systems were created in the late 1950s and the first complete text-to-speech system was completed in 1968. Since then, there have been many advances in the technologies used to synthesize speech. See the examples below for state-of-the-art commercial and free text-to-speech systems.

Synthesizer technologies

There are two main technologies used for generating synthetic speech waveforms: concatenative synthesis and formant synthesis.

Concatenative synthesis

Concatenative synthesis is based on the concatenation (or stringing together) of segments of recorded speech. Generally, concatenative synthesis gives the most natural sounding synthesized speech. However, natural variation in speech and automated techniques for segmenting the waveforms sometimes result in audible glitches in the output, detracting from the naturalness. There are three main subtypes of concatenative synthesis:

  • Unit selection synthesis uses large speech databases (more than one hour of recorded speech). During database creation, each recorded utterance is segmented into some or all of the following: individual phones, syllables, morphemes, words, phrases, and sentences. Typically, the division into segments is done using a specially modified speech recognizer set to a "forced alignment" mode, with some hand correction afterward using visual representations such as the waveform and spectrogram. An index of the units in the speech database is then created based on the segmentation and on acoustic parameters like the fundamental frequency (pitch), duration, position in the syllable, and neighboring phones. At runtime, the desired target utterance is created by determining the best chain of candidate units from the database (unit selection); this search is typically carried out with a specially weighted decision tree (a simplified sketch of the search appears after this list). Unit selection gives the greatest naturalness because it applies only a small amount of digital signal processing to the recorded speech; heavy signal processing often makes recorded speech sound less natural, although some systems apply a little processing at the point of concatenation to smooth the waveform. Output from the best unit selection systems is often indistinguishable from real human voices, especially in contexts for which the TTS system has been tuned. However, maximum naturalness often requires unit selection speech databases to be very large, in some systems ranging into the gigabytes of recorded data and representing dozens of hours of recorded speech.
  • Diphone synthesis uses a minimal speech database containing all the diphones (sound-to-sound transitions) occurring in a given language. The number of diphones depends on the phonotactics of the language: Spanish has about 800 diphones, German about 2,500. In diphone synthesis, only one example of each diphone is contained in the speech database. At runtime, the target prosody of a sentence is superimposed on these minimal units by means of digital signal processing techniques such as linear predictive coding, PSOLA, or MBROLA. The quality of the resulting speech is generally not as good as that of unit selection but more natural-sounding than the output of formant synthesizers. Diphone synthesis suffers from the sonic glitches of concatenative synthesis and the robotic-sounding nature of formant synthesis, and has few of the advantages of either approach other than its small size. As such, its use in commercial applications is declining, although it continues to be used in research because a number of freely available implementations exist.
  • Domain-specific synthesis concatenates pre-recorded words and phrases to create complete utterances. It is used in applications where the variety of texts the system will output is limited to a particular domain, like transit schedule announcements or weather reports. The technology is simple to implement and has long been used commercially, in devices like talking clocks and calculators. The naturalness of these systems can be very high because the variety of sentence types is limited and the output closely matches the prosody and intonation of the original recordings. However, because these systems are limited to the words and phrases in their databases, they are not general-purpose and can only synthesize the combinations of words and phrases with which they have been pre-programmed.
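
As a rough illustration of the unit-selection search mentioned in the first item above, the following Python fragment runs a Viterbi-style dynamic program over candidate units, trading a target cost (how well a unit matches the predicted pitch and duration) against a join cost (how smoothly adjacent units concatenate). The unit features and cost functions are simplified assumptions invented for illustration, not those of any real system.

    def target_cost(unit, target):
        # How well a candidate unit's pitch and duration match the predicted target.
        return abs(unit["pitch"] - target["pitch"]) + abs(unit["dur"] - target["dur"])

    def join_cost(prev_unit, unit):
        # How smoothly two units concatenate (here: pitch mismatch at the boundary).
        return abs(prev_unit["pitch"] - unit["pitch"])

    def unit_selection(targets, candidates):
        """targets: predicted specs per segment; candidates: candidate units per segment."""
        best = [(target_cost(u, targets[0]), [u]) for u in candidates[0]]
        for target, cands in zip(targets[1:], candidates[1:]):
            new_best = []
            for u in cands:
                cost, path = min(((c + join_cost(p[-1], u), p) for c, p in best),
                                 key=lambda cp: cp[0])
                new_best.append((cost + target_cost(u, target), path + [u]))
            best = new_best
        return min(best, key=lambda cp: cp[0])[1]

    # Example: two target segments, each with two recorded candidate units.
    targets = [{"pitch": 120, "dur": 80}, {"pitch": 110, "dur": 70}]
    candidates = [[{"pitch": 118, "dur": 82}, {"pitch": 140, "dur": 60}],
                  [{"pitch": 112, "dur": 69}, {"pitch": 90, "dur": 95}]]
    chain = unit_selection(targets, candidates)   # picks the closest, smoothest chain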

Formant synthesis

Formant synthesis does not use any human speech samples at runtime. Instead, the synthesized speech output is created using an acoustic model. Parameters such as fundamental frequency, voicing, and noise levels are varied over time to create a waveform of artificial speech. This method is sometimes called rule-based synthesis, but some argue the term is not specific enough, because many concatenative systems also use rule-based components for parts of the system, such as the front end.

Many systems based on formant synthesis technology generate artificial, robotic-sounding speech, and the output would never be mistaken for the speech of a real human. However, maximum naturalness is not always the goal of a speech synthesis system, and formant synthesis systems have some advantages over concatenative systems.

First, formant-synthesized speech is reliably intelligible, even at very high speeds, avoiding the acoustic glitches that often plague concatenative systems. High-speed synthesized speech is often used by the visually impaired to navigate computers quickly with a screen reader. Second, formant synthesizers are usually smaller programs than concatenative systems because they carry no database of speech samples, so they can be used in embedded systems where memory and processor power are scarce. Last, because formant-based systems have total control over all aspects of the output speech, they can produce a wide variety of prosody and intonation, conveying not just questions and statements but a range of emotions and tones of voice.
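
The core source-filter idea behind formant synthesis can be sketched in a few lines: a periodic glottal source is passed through a cascade of second-order resonators, one per formant. This is only a minimal illustration; the formant frequencies and bandwidths below are rough textbook values for a single static vowel (/a/), whereas a real formant synthesizer, such as a Klatt-style system, varies many such parameters continuously over time.

    import numpy as np
    from scipy.signal import lfilter

    fs = 16000                         # sample rate in Hz
    f0 = 120                           # fundamental frequency (pitch) in Hz
    n = int(fs * 0.5)                  # half a second of audio

    # Glottal source: a simple impulse train at the fundamental frequency.
    source = np.zeros(n)
    source[::fs // f0] = 1.0

    def resonator(x, freq, bandwidth):
        """Filter x through one second-order resonator (one formant)."""
        r = np.exp(-np.pi * bandwidth / fs)
        theta = 2 * np.pi * freq / fs
        a = [1.0, -2 * r * np.cos(theta), r * r]
        b = [sum(a)]                   # normalize for unity gain at DC
        return lfilter(b, a, x)

    # Approximate (frequency, bandwidth) pairs in Hz for the vowel /a/.
    speech = source
    for freq, bw in [(730, 90), (1090, 110), (2440, 170)]:
        speech = resonator(speech, freq, bw)

    speech /= np.max(np.abs(speech))   # normalize amplitude before playback or saving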

Other synthesis methods

  • Articulatory synthesis has until recently been a synthesis method mostly of academic interest. It is based on computational models of the human vocal tract and the articulation processes occurring there. Few of these models are currently advanced or computationally efficient enough to be used in commercial speech synthesis systems. A notable exception is the NeXT-based system originally developed and marketed by Trillium Sound Research Inc., a software spin-off of the University of Calgary (Calgary, Alberta, Canada), where much of the original research was conducted. Following the demise of the various incarnations of NeXT (started by Steve Jobs in the late 1980s and merged with Apple in 1997), the Trillium software was released under the GNU General Public License (see the GNU web site, http://www.gnu.org), with work continuing as gnuspeech, a GNU project. The original NeXT software and recent ports of major portions of it to Mac OS X and GNU/Linux (GNUstep) are available on the GNU Savannah site (http://savannah.gnu.org/projects/gnuspeech), along with online manuals and papers on the theoretical underpinnings of the work. The system, first marketed in 1994, provides full articulatory-based text-to-speech conversion using a waveguide (transmission-line) analog of the human oral and nasal tracts controlled by Carré's Distinctive Region Model, which is in turn based on formant sensitivity analysis by Fant and others at the Stockholm speech technology laboratory of the Royal Institute of Technology. That work showed that the formants in a resonant tube can be controlled by just eight parameters that correspond closely to the naturally available articulators in the human vocal tract. The system includes a full pronouncing dictionary lookup together with context-sensitive rules for posture concatenation and parameter generation, as well as models of rhythm and intonation derived from linguistic and phonological research.
  • Hybrid synthesis marries aspects of formant and concatenative synthesis to minimize the acoustic glitches when speech segments are concatenated.

Front-end challenges

Text normalization challenges

The process of normalizing text is rarely straightforward. Texts are full of homographs, numbers and abbreviations that all ultimately require expansion into a phonetic representation.

There are many words in English which are pronounced differently based on context. Some examples:

  • project: My latest project is to learn how to better project my voice.
  • bow: The girl with the bow in her hair was told to bow deeply when greeting her superiors.

Most TTS systems do not generate semantic representations of their input texts, as processes for doing so are not reliable, well-understood, or computationally effective. As a result, various heuristic techniques are used to guess the proper way to disambiguate homographs, like looking at neighboring words and using statistics about frequency of occurrence.

Deciding how to convert numbers is another problem TTS systems have to address. It is a fairly simple programming challenge to convert a number into words, like 1325 becoming "one thousand three hundred twenty-five". However, numbers occur in many different contexts in texts, and 1325 should probably be read as "thirteen twenty-five" when part of an address (1325 Main St.) and as "one three two five" if it is the last four digits of a social security number. Often a TTS system can infer how to expand a number based on surrounding words, numbers, and punctuation, and sometimes the systems provide a way to specify the type of context if it is ambiguous.
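
A minimal sketch of such context-dependent number expansion follows. It assumes the context label ("cardinal", "address", or "digit_string", all hypothetical names) has already been guessed upstream from surrounding words and punctuation, and its tables cover only what the examples above require.

    ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven", "eight",
            "nine", "ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen",
            "sixteen", "seventeen", "eighteen", "nineteen"]
    TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy",
            "eighty", "ninety"]

    def below_hundred(n):
        if n < 20:
            return ONES[n]
        return TENS[n // 10] + ("-" + ONES[n % 10] if n % 10 else "")

    def cardinal(n):
        # Handles 0..9999, which is enough for the examples in the text above.
        parts = []
        if n >= 1000:
            parts.append(ONES[n // 1000] + " thousand")
            n %= 1000
        if n >= 100:
            parts.append(ONES[n // 100] + " hundred")
            n %= 100
        if n or not parts:
            parts.append(below_hundred(n))
        return " ".join(parts)

    def expand_number(digits, context="cardinal"):
        if context == "address":               # e.g. "1325 Main St."
            return below_hundred(int(digits[:2])) + " " + below_hundred(int(digits[2:]))
        if context == "digit_string":          # e.g. the last four digits of an ID number
            return " ".join(ONES[int(d)] for d in digits)
        return cardinal(int(digits))

    # expand_number("1325")                  -> "one thousand three hundred twenty-five"
    # expand_number("1325", "address")       -> "thirteen twenty-five"
    # expand_number("1325", "digit_string")  -> "one three two five"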

Similarly, abbreviations can be ambiguous even when, like "etc." ("et cetera"), they are usually easy to render. For example, the abbreviation "in." in "Yesterday it rained 3 in. Take 1 out, then put 3 in." can stand for either "inches" or the preposition "in", and "St." can stand for either "Saint" or "Street", as in "St. John St." TTS systems with intelligent front ends can make educated guesses about ambiguous abbreviations, while others produce the same result in all cases, leading to nonsensical (but sometimes comical) outputs such as "Yesterday it rained three in." or "Take one out, then put three inches."

Text-to-phoneme challenges

Speech synthesis systems use two basic approaches to determine the pronunciation of a word based on its spelling, a process which is often called text-to-phoneme or grapheme-to-phoneme conversion, as phoneme is the term used by linguists to describe distinctive sounds in a language.

The simplest approach to text-to-phoneme conversion is the dictionary-based approach, where a large dictionary containing all the words of a language and their correct pronunciation is stored by the program. Determining the correct pronunciation of each word is a matter of looking up each word in the dictionary and replacing the spelling with the pronunciation specified in the dictionary.

The other approach used for text-to-phoneme conversion is the rule-based approach, where rules for the pronunciations of words are applied to words to work out their pronunciations based on their spellings. This is similar to the "sounding out" approach to learning reading.

Each approach has advantages and drawbacks. The dictionary-based approach is quick and accurate, but it fails completely when given a word that is not in its dictionary, and as dictionary size grows, so do the memory requirements of the synthesis system. The rule-based approach works on any input, but the complexity of the rules grows substantially as they take irregular spellings and pronunciations into account. As a result, nearly all speech synthesis systems use a combination of both approaches.
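
The combined approach can be sketched as a dictionary lookup with letter-to-sound rules as a fallback. The mini-lexicon and rules below are toy data invented for illustration; real dictionaries hold well over a hundred thousand entries, and real rule sets (or statistical models trained on such dictionaries) are far more elaborate.

    # Hypothetical mini-lexicon (ARPAbet-style phoneme symbols).
    LEXICON = {
        "speech": ["S", "P", "IY", "CH"],
        "one":    ["W", "AH", "N"],     # irregular: the rules below would get this wrong
        "two":    ["T", "UW"],
    }

    # Fallback letter-to-sound rules, longest grapheme first so "ch" wins over "c".
    LTS_RULES = [
        ("ch", ["CH"]), ("sh", ["SH"]), ("th", ["TH"]), ("ee", ["IY"]), ("oo", ["UW"]),
        ("a", ["AE"]), ("e", ["EH"]), ("i", ["IH"]), ("o", ["AA"]), ("u", ["AH"]),
        ("b", ["B"]), ("c", ["K"]), ("d", ["D"]), ("f", ["F"]), ("g", ["G"]),
        ("h", ["HH"]), ("k", ["K"]), ("l", ["L"]), ("m", ["M"]), ("n", ["N"]),
        ("p", ["P"]), ("r", ["R"]), ("s", ["S"]), ("t", ["T"]), ("v", ["V"]),
        ("w", ["W"]), ("y", ["Y"]), ("z", ["Z"]),
    ]

    def to_phonemes(word):
        word = word.lower()
        if word in LEXICON:                        # dictionary-based approach first
            return LEXICON[word]
        phones, i = [], 0                          # rule-based "sounding out" fallback
        while i < len(word):
            for grapheme, phoneme in LTS_RULES:
                if word.startswith(grapheme, i):
                    phones += phoneme
                    i += len(grapheme)
                    break
            else:
                i += 1                             # no rule for this letter: skip it
        return phones

    # to_phonemes("speech") -> lexicon hit: ['S', 'P', 'IY', 'CH']
    # to_phonemes("sheet")  -> via rules:   ['SH', 'IY', 'T']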

Some languages, like Spanish, have a very regular writing system, and the prediction of the pronunciation of words based on the spelling works correctly in nearly all instances. Speech synthesis systems for languages like this often use the rule-based approach as the core approach for text-to-phoneme conversion, resorting to dictionaries only for those few words, like foreign names and borrowings, whose pronunciation is not obvious from the spelling. On the other hand, speech synthesis for languages like English, which have extremely irregular spelling systems, often rely mostly on dictionaries and use rule-based approaches only for unusual words or names that aren't in the dictionary.

Speech synthesis markup languages

A number of markup languages have been established for the rendition of text as speech in an XML-compliant format. The most recent is SSML, proposed by the W3C and still in draft status at the time of this writing. Older speech synthesis markup languages include SABLE and JSML. Although each of these was proposed as a standard, none has been widely adopted.
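
As an illustration of what such markup looks like, the sketch below wraps plain text in a minimal SSML document. The element names come from the W3C SSML specification; the particular prosody values are arbitrary examples.

    from xml.sax.saxutils import escape

    def to_ssml(text, rate="medium", pitch="medium"):
        """Wrap text in a minimal SSML document with a single prosody setting."""
        return (
            '<?xml version="1.0"?>\n'
            '<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">\n'
            f'  <prosody rate="{rate}" pitch="{pitch}">{escape(text)}</prosody>\n'
            '</speak>\n'
        )

    print(to_ssml("Yesterday it rained 3 in."))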

Aural Cascading Style Sheets form a subset of the Cascading Style Sheets 2 specification.

Speech synthesis markup languages should be distinguished from dialogue markup languages such as VoiceXML, which includes, in addition to text-to-speech markup, tags related to speech recognition, dialogue management and touchtone dialing.

See also

External links

Misc

  • Samples (http://www.tmaa.com/tts/comparison_USEng_highres.htm) of commercial TTS systems.
  • Free (http://www.chipspeaking.com) Speech Synthesis system designed for the vocally impaired, with links to other speech related assistive technologies and resources for PALS.
  • Speech Synthesis & Analysis Software (http://www.bright.net/~dlphilp/linuxsound/one-page.html#speech)
  • comp.speech Frequently Asked Questions (http://www.speech.cs.cmu.edu/comp.speech/)
  • Free TTS Audio Books (http://freeclassicaudiobooks.com) Free audio book downloads created with NeoSpeech voices

Freely available TTS systems

  • Festival (http://www.cstr.ed.ac.uk/projects/festival/) is a freely available complete diphone concatenation and unit selection TTS system.
  • Flite (http://www.speech.cs.cmu.edu/flite/) (Festival-lite) is a smaller, faster alternative version of Festival designed for embedded systems and high volume servers.
  • FreeTTS (http://freetts.sourceforge.net/docs/index.php) written entirely in Java, based on Flite.
  • MBROLA (http://tcts.fpms.ac.be/synthesis/mbrola.html) is a freely available diphone concatenation system (back end).
  • Gnuspeech (http://www.gnu.org/software/gnuspeech/) is an extensible, text-to-speech package, based on real-time, articulatory, speech-synthesis-by-rules.
  • Epos (http://epos.ure.cas.cz/) is a rule-driven TTS system primarily designed to serve as a research tool. It supports Czech and Slovak.
  • HTS (http://hts.ics.nitech.ac.jp/) is a freely available HMM-based speech synthesis system (back end).

Commercially available TTS systems
