Lots of text-to-speech service providers use AI voices from Amazon Polly, Microsoft Azure, and Google Cloud Platform.
But these voices sound best when you use them in BeyondWords.*
Why?
It's thanks to our natural language processing algorithms.
Setting voices up for success
AI voices can interpret text in two formats: plain text or Speech Synthesis Markup Language (SSML).
SSML tags provide extra information to the AI voice, clarifying pronunciations and improving speech flow. Using SSML therefore ensures a higher-quality voice output.
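For example, standard SSML lets you expand abbreviations and control pauses explicitly (an illustrative snippet, not BeyondWords' exact output):

<speak>
  <s><sub alias="Doctor">Dr.</sub> Smith arrived early.</s>
  <break time="300ms"/>
  <s>She had flown in from the <sub alias="United Kingdom">UK</sub>.</s>
</speak>

Here the <sub> tags tell the voice how to expand "Dr." and "UK", while the <break> tag inserts a short pause to improve flow.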
Some text-to-speech services only allow you to input plain text, meaning you can't achieve higher-quality outputs through SSML.
Others, like Amazon Polly, give you the option to manually insert SSML tags. But this is complex and time-consuming. Let's say the voice is mispronouncing "Joe Biden". Fixing this requires an understanding not only of SSML, but also of the International Phonetic Alphabet (IPA), the symbols linguists use to represent speech sounds.
<phoneme alphabet="ipa" ph="'dʒoʊ baɪdən">Joe Biden</phoneme>
This is not feasible for the majority of publishers.
BeyondWords, on the other hand, adds the SSML tags for you.
Whether it's imported from your website in HTML format or added manually as plain text, your content is automatically converted into SSML before being processed by the AI voice.
This is made possible by a layer of natural language processing (NLP) algorithms, which can programmatically read, analyze, and interpret written language.
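As a simplified illustration, an imported HTML paragraph might be converted along these lines (the tags shown are standard SSML; this is a sketch rather than our exact output):

<p>Angela Merkel spoke at the <strong>Bundestag</strong>.</p>

becomes

<speak><s>Angela Merkel spoke at the <lang xml:lang="de-DE">Bundestag</lang>.</s></speak>

The HTML formatting is stripped out, the sentence boundary is marked with an <s> tag, and the German word is wrapped in a <lang> tag so the voice pronounces it correctly.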
How our NLP works
Our NLP uses a combination of rule-based and neural-network-based techniques. Our deep learning models are trained on large data sets, which allow them to "learn" how humans convert particular text elements into speech and how this differs depending on context. This is particularly useful for resolving ambiguities in text.
For example, in the sentence "I read the book", our NLP identifies, through contextual features, that the homograph "read" is most likely being used in the past tense, and so should be pronounced like "red" (/rɛd/) as opposed to "reed" (/riːd/). It applies the <phoneme> SSML tag accordingly, ensuring the AI voice gives the correct output.
<s>I <phoneme alphabet="ipa" ph="'rɛd">read</phoneme> the book yesterday.</s>
Also consider non-standard elements, such as dates. Our system can determine whether to read a number as a cardinal numeral (e.g. twenty-nine) or an ordinal numeral (e.g. twenty-ninth) based on the usage context and other features.
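In standard SSML, that decision is expressed with the <say-as> tag. For example (illustrative snippets):

<s>The results are due on May <say-as interpret-as="ordinal">29</say-as>.</s>
<s>The report runs to <say-as interpret-as="cardinal">29</say-as> pages.</s>

The first "29" is read as "twenty-ninth", the second as "twenty-nine".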
Without this NLP and SSML layer, the AI voice may not be able to predict which pronunciation of an element is correct, and will simply output a naive "best guess". This is why many text-to-speech systems struggle with ambiguities.
A customizable and evolving NLP
Our team of computational linguists makes iterative and domain-specific improvements to the NLP. This means that voice outputs evolve even when the AI voices behind them stay the same, and can adapt to the needs of each BeyondWords user.
Take the abbreviation "NLP", for instance. In the context of this article, it refers to "natural language processing". But it is also the airport code for Nelspruit, South Africa. In medicine, it can be shorthand for "no light perception". With custom-built text normalization rules, we can ensure that our system delivers the most relevant result for a particular publisher.
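In practice, a rule like this can be expressed with a <sub> tag tailored to the publisher's domain. For example (illustrative snippets, not our exact output):

<s>Our <sub alias="natural language processing">NLP</sub> layer adds SSML tags automatically.</s>
<s>The patient presented with <sub alias="no light perception">NLP</sub> in the left eye.</s>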
You can even add your own text-to-speech rules.
Text-to-speech providers without a post-optimization NLP layer have no way to efficiently extend or make domain-specific customizations to an existing voice. If they wish to correct conversion errors, they must retrain the voice itself, something that comes at great cost and cannot guarantee accuracy, especially when it comes to unusual and idiosyncratic text-to-speech conversions.
Here are some more examples of what our NLP can do:
- Apply <phoneme> SSML tags to ensure correct pronunciation of novel or complex words
- Apply <lang> SSML tags to ensure foreign words are pronounced accurately
- Apply <sub> SSML tags to ensure symbols, abbreviations, and acronyms are pronounced properly (see the examples after this list)
- In HTML:
  - Identify tweets embedded within "blockquote" elements with "twitter-tweet" attributes, then fetch, clean, and read the content of the tweet
  - Remove elements that shouldn't be read aloud, such as image captions, so that they're not processed into audio
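For instance, the <lang> and <sub> tags mentioned above look like this in standard SSML (illustrative snippets, not our exact output):

<s>She greeted us with <lang xml:lang="fr-FR">bonjour</lang>.</s>
<s>The speed limit is 100 <sub alias="kilometres per hour">km/h</sub>.</s>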
All of this happens automatically, and has very little impact on production speeds: your text will typically be processed into audio within a couple of minutes.
The full package for digital publishers
BeyondWords is a popular alternative to Amazon Polly, Google Read Aloud Player, and other text-to-speech service providers not only because of our customizable NLP and automatic SSML tagging, but because of our audio CMS.
While our CMS integrations and Text-to-Speech Editor make it easy to create audio, our embeddable players, shareable URLs, and podcast feeds make distribution near-effortless. Publishers also have the option to monetize and analyze their audio.
All users get access to advanced AI voices from Amazon, Microsoft, and Google, while subscribers can request access to premium voices like 'Joe'. You also have the option to create a custom voice.
Sign up free or arrange a demo with our team to learn more.
* Compared to non-SSML text-to-speech.