

Dysarthric speech is unintelligible or unnatural speech commonly caused by motor, speech, and language disorders that affect the parts of the body responsible for producing speech, such as the tongue and vocal cords. There are several patterns of dysarthric speech, but characteristics differ from person to person. People with dysarthric speech can use alternative and augmented communication methods like switches and communication boards, but these aren't optimal solutions: the process is repetitive and too slow to preserve the natural flow of conversation, and the user's communication is limited to predetermined content. In cases of neuromuscular conditions, many people with dysarthric speech also have severely limited mobility, requiring 24-hour caregiver support for everything from wheelchair transfers to turning the lights on and off. While virtual assistants and voice recognition technologies could offer hope of independence by activating connected devices to bridge the physical gap, these solutions are notoriously intolerant of atypical speech. And because there is no "typical" pattern of dysarthric speech, current algorithms can't simply be taught to be more inclusive.

Voiceitt learns to recognize a user's unique speech patterns, including elements such as breathing pauses and non-verbal sounds, and interprets them as intelligible speech or text in real time. Its first product, a mobile app free on the App Store, is tailored to the speaker and works in any language, including Hebrew and Arabic. Voiceitt can also facilitate spontaneous interaction and independence in users' everyday lives, whether by implementing specialized vocabulary to fit a professional environment or by inputting a coffee order. Voiceitt has also been integrated with Alexa, Amazon's smart home virtual assistant, so people with a wider range of physical abilities can use connected devices to control their own environment through voice activation. The algorithm adapts commands to the user's ability and learns to recognize when the wake word is spoken, even if it doesn't sound coherent or even similar to "Alexa." This eases the burden on the family and care team by empowering users to independently perform numerous tasks with the same voice that previously constituted a barrier to communication and a source of frustration. This flexibility also means that Alexa can be used by speakers with any accent and in any language, even one Alexa doesn't support yet, making virtual assistants more accessible to people all over the world with both atypical and typical speech. Voiceitt's most recent development milestone was the 2022 exclusive beta release of its next-generation continuous speech functionality, which allows the spontaneous composition of messages and documents using an enhanced algorithm, enabled by the submission of thousands of user recordings that vastly expanded the company's database.
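Voiceitt's actual algorithm is proprietary and not public, but the idea of recognizing a wake word from a user's own pronunciations (rather than a canonical one) can be illustrated with a classic technique: template matching with dynamic time warping (DTW) over enrolled recordings. The sketch below is purely hypothetical; the feature values, function names, and threshold are invented for illustration, with toy 1-D "feature sequences" standing in for real acoustic features.

```python
# Illustrative sketch only: personalized wake-word detection via
# template matching with dynamic time warping (DTW). All names,
# feature values, and thresholds are hypothetical; this is not
# Voiceitt's actual algorithm.

def dtw_distance(a, b):
    """DTW distance between two 1-D feature sequences, allowing
    frames to stretch or compress in time."""
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch template
                                 cost[i][j - 1],      # stretch utterance
                                 cost[i - 1][j - 1])  # match frames
    return cost[n][m]

def is_wake_word(utterance, templates, threshold=0.5):
    """True if the utterance is close enough to any template the
    user enrolled with their own pronunciation of the wake word."""
    return min(dtw_distance(utterance, t) for t in templates) <= threshold

# Toy example: two templates enrolled from the user's own recordings.
templates = [[0.1, 0.9, 0.4, 0.4], [0.2, 0.8, 0.5, 0.3]]
print(is_wake_word([0.1, 0.85, 0.45, 0.35], templates))  # close match -> True
print(is_wake_word([0.9, 0.1, 0.9, 0.1], templates))     # dissimilar -> False
```

Because the reference templates come from the user rather than from a standard pronunciation model, the detector triggers on that speaker's version of the wake word even when it sounds nothing like "Alexa" to a generic recognizer.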