Talk about your time savers. New research from Microsoft and the University of Toronto may make it possible for non-Chinese speakers to “speak” Chinese in their own voices without ever learning the language. Given the trade relationship between the US and China, this could be a really big deal if it works as advertised.
While great strides have been made in speech recognition over the past few decades, current systems still carry word error rates of 20 percent to 25 percent when handling “arbitrary speech,” Microsoft’s Richard Rashid wrote in a blog post. (Do you hear that, Siri?)
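For context, word error rate is just the number of word-level substitutions, insertions, and deletions a recognizer makes, divided by the length of the reference transcript. Here is a minimal sketch of that calculation (not Microsoft’s code; the sample sentences are invented):

```python
def word_error_rate(reference, hypothesis):
    """Edit distance between word sequences, divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard Levenshtein distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One wrong word out of five -> 0.2, the low end of the range Rashid cites.
print(word_error_rate("turn the volume up please",
                      "turn the volume of please"))
```

In other words, at a 20 to 25 percent error rate, roughly one word in every four or five comes out wrong.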
But now, a new technology called deep neural networks, which mimics the way the human brain operates, enables much more discriminating speech recognition, according to Rashid, Microsoft’s chief research officer.
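The “mimics the brain” description boils down to stacked layers of simple artificial neurons, each one a weighted sum followed by a nonlinearity. The toy forward pass below is only an illustration of that idea, not Microsoft’s system; the layer sizes, random weights, and class count are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One layer of 'neurons': weighted sum plus a ReLU nonlinearity."""
    w = rng.normal(scale=0.1, size=(len(x), n_out))
    b = np.zeros(n_out)
    return np.maximum(0.0, x @ w + b)

features = rng.normal(size=40)        # stand-in for one frame of audio features
h1 = layer(features, 128)             # hidden layer 1
h2 = layer(h1, 128)                   # hidden layer 2
scores = h2 @ rng.normal(size=(128, 50))  # scores for 50 hypothetical speech-sound classes
probs = np.exp(scores - scores.max())
probs /= probs.sum()                  # softmax: probability for each class
print(probs.argmax(), probs.max())
```

A real acoustic model is trained on huge amounts of labeled speech rather than using random weights, but the layered structure is the part Rashid is referring to.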
Rashid, who demonstrated the technology at a Microsoft conference in Tianjin, China, in late October, said the process takes…