Apple is training Siri to better understand people with speech impairments

According to the Wall Street Journal, Apple is training Siri to better recognize the speech of people with impairments such as stuttering. It is one more step in the accessibility and inclusion efforts that, in many respects, make Apple's platforms an example for the industry.

28,000 audio clips extracted from podcasts

According to the publication, Apple has built a database of 28,000 audio clips, properly labeled, cleaned, and ready to be used to train the assistant. The data was collected from podcasts featuring people with some kind of speech impairment.

According to an Apple spokesperson, this training will allow Siri to recognize requests more accurately. The improved voice recognition is complemented by the "Hold to speak" feature, which lets us keep holding the Siri button on our devices during a long or paused request, so that a moment of silence does not make the assistant conclude that we have finished speaking.

The feature, introduced in 2015, genuinely makes the assistant easier to use in certain situations and reflects the company's particular interest in making its devices and services equally accessible to all users.

A database of 28,000 samples with which to train Siri to better recognize all types of speech.

Apple will publish its Siri improvement procedures in a research paper in the coming days, detailing the processes the company is carrying out to improve the assistant. The publication should help other teams in the sector improve the comprehension capabilities of their own assistants.

 

by Abdullah Sam
