Yu and colleagues managed to improve on hidden Markov model (HMM) automatic speech recognition (ASR) by 20-30%, which is impressive considering HMMs have been under development for some 40 years now.
FYI, the main stand-alone ASR program is Dragon 12, about $100 for the deluxe version, which can accept your recordings. A headset is mandatory, and the Star Trek / 2001 stuff has never happened. In any case, the paper goes into the architecture and training protocol a bit. They hijacked a graphics card to do the dot products for the feed-forward pass and the error feed-back through the net.
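To see why a graphics card helps, here is a minimal sketch (my own illustration in numpy, not the paper's code) showing that both the feed-forward pass and the error feed-back are just matrix dot products, which is exactly the operation a GPU is built to do fast:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-layer net: 4 inputs -> 3 hidden units -> 2 outputs
W1 = rng.standard_normal((4, 3)) * 0.1
W2 = rng.standard_normal((3, 2)) * 0.1

x = rng.standard_normal((1, 4))   # one input vector
t = np.array([[1.0, 0.0]])        # target output

# Feed-forward: two dot products plus a nonlinearity
h = np.tanh(x @ W1)
y = h @ W2

# Error feed-back (backpropagation): the gradients are dot products too
dy = y - t                        # output error
dW2 = h.T @ dy
dh = (dy @ W2.T) * (1 - h**2)     # chain rule through tanh
dW1 = x.T @ dh

# One gradient-descent step on the weights
W2 -= 0.1 * dW2
W1 -= 0.1 * dW1
```

Every heavy line above is a matrix multiply, so offloading them to the card speeds up both directions of training at once.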
In general, artificial neural networks (ANNs) have caught on as a staple for discriminative optical measurement and other sensor devices. I am not 100% sure why, but I suspect it's not all hype. It may be that nets don't require the mathematical horsepower needed for more rigorous approaches.
Take a semi-trailer backing up to a loading dock as an example: that can be coded in a day with an ANN. But could you do it mathematically?
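To make the "coded in a day" claim concrete, here is a toy stand-in of my own invention (not the actual truck backer-upper): instead of deriving the trailer kinematics, generate (offset, heading) -> steering examples from a crude hand-made rule and let a one-hidden-layer net fit them. The point is the workflow, not the controller quality.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fake training data: crude rule "steer against offset and heading error"
X = rng.uniform(-1, 1, size=(200, 2))        # [lateral offset, heading]
y = -0.7 * X[:, :1] - 0.3 * X[:, 1:]         # target steering angle

# One hidden layer, trained by plain gradient descent
W1 = rng.standard_normal((2, 8)) * 0.5
W2 = rng.standard_normal((8, 1)) * 0.5
lr = 0.05

for _ in range(2000):
    h = np.tanh(X @ W1)
    pred = h @ W2
    err = pred - y
    W2 -= lr * (h.T @ err) / len(X)
    W1 -= lr * (X.T @ ((err @ W2.T) * (1 - h**2))) / len(X)

mse = float(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2))
```

No kinematics, no control theory; you just show the net examples of what to do. Doing the same job analytically means deriving and solving the trailer's nonlinear equations of motion.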
The faults of ANNs include the fact that they don't tell you HOW a decision was made, since a net is a black-box device. Coders "dissect" the net after training to get clues.
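One common dissection trick (a generic post-hoc method, not something from the paper) is to backpropagate to the *inputs* of a trained net: the gradient of the output with respect to each input says which inputs influenced the decision most, at least locally. A sketch with hand-made weights, rigged so feature 0 matters and feature 2 barely does:

```python
import numpy as np

# Pretend these are trained weights: input feature 0 is wired strongly,
# feature 2 almost not at all (set up by hand so the answer is obvious).
W1 = np.array([[2.0, -1.5],
               [0.5,  0.4],
               [0.01, 0.02]])
W2 = np.array([[1.0], [-1.0]])

x = np.array([[0.3, -0.2, 0.8]])   # one input to explain
h = np.tanh(x @ W1)
y = h @ W2

# Backprop to the input instead of the weights: dy/dx
dx = ((1 - h**2) * W2.T) @ W1.T    # shape (1, 3), one value per input
importance = np.abs(dx)
```

Here `importance` ranks the inputs by local influence on the output. It's only a small window into the black box, but it's the kind of clue dissection gives you.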