Intron Health gets backing for its speech-recognition tool that recognizes African accents

Voice recognition is being integrated into nearly all facets of modern life, but a big gap remains: Speakers of minority languages, and those with thick accents or speech disorders like stuttering, are typically less able to use speech-recognition tools that control applications, transcribe speech or automate tasks, among other functions.

Tobi Olatunji, founder and CEO of clinical speech-recognition startup Intron Health, wants to bridge this gap. He claims that Intron has Africa’s largest clinical speech database, with its algorithm trained on 3.5 million audio clips (16,000 hours) from over 18,000 contributors, mainly healthcare practitioners, representing 29 countries and 288 accents. Olatunji says that drawing most of its contributors from the healthcare sector ensures that medical terms are pronounced and captured correctly for his target markets.

“Because we’ve already trained on many African accents, it’s very likely that the baseline performance on their accents will be much better than any other service they use,” he said, adding that data from Ghana, Uganda and South Africa is growing and that the startup is confident about deploying the model there.

Olatunji’s interest in health tech stems from two strands of his experience. First, he received training and practiced as a medical doctor in Nigeria, where he saw firsthand the inefficiencies of the systems in that market, including how much paperwork needed to be filled out and how hard it was to track all of it.

“When I was a doctor in Nigeria a couple years ago, even during medical school and even now, I get irritated easily doing a repetitive task that is not deserving of human efforts,” he said. “An easy example is we had to write a patient’s name on every lab order you do. And just something that’s simple, let’s say I’m seeing the patients, and they need to get some prescriptions, they need to get some labs. I have to manually write out every order for them. It’s just frustrating for me to have to repeat the patient name over and over on each form, the age, the date, and all that. … I’m always asking, how can we do things better? How can we make life easier for doctors? Can we take some tasks away and offload them to another system so that the doctor can spend their time doing things that are very valuable?”

Those questions propelled him to the next phase of his life. Olatunji moved to the U.S. to pursue a master’s degree in medical informatics from the University of San Francisco and then another in computer science at Georgia Tech.

He then cut his teeth at a number of tech companies. As a clinical natural language processing (NLP) scientist and researcher at Enlitic, a San Francisco Bay Area company, he built models to automate the extraction of information from radiology text reports. He also worked at Amazon Web Services as a machine learning scientist. At both Enlitic and Amazon, he focused on natural language processing for healthcare, shaping systems that help hospitals run better.
