When Roger Ebert lost his lower jaw—and, thus, his voice—to cancer, the text-to-speech company CereProc created a synthetic voice custom-made for the film critic. The computerized voice, a fusion of the words Ebert had recorded in his long career, would not sound fully natural; it would, however, sound distinctive. It was meant to help Ebert regain something he had lost with the removal of his vocal cords: a voice of his own.

Most people are not so lucky. Those who have had strokes—or who live with ailments like Parkinson's or cerebral palsy—often rely on versions of synthetic voices that are completely generic in their delivery. (Think of Stephen Hawking's computerized monotone. Or of Alex, the voice of Apple's VoiceOver software.) The good news is that these people are able to be heard; the bad news is that they have still been robbed of one of the most powerful things a voice can give us: a unique, and audible, identity.

Up in Boston, Rupal Patel is hoping to change that. She and her collaborator, Tim Bunnell of the Nemours/A.I. duPont Hospital for Children, have for several years been developing algorithms that build voices for those unable to speak without computer assistance. The voices aren't just natural-sounding; they're also unique. They're vocal prosthetics, essentially, tailored to the existing voices (and, more generally, the identities) of their users.

Read the article at The Atlantic →