According to members of the artificial intelligence research community, Apple has formed its own speech recognition team that could debut an enhanced Siri powered by neural nets, reports Wired.

The story has Canadian roots: the idea of using neural nets for speech recognition, a technique known as deep learning, was long championed by the University of Toronto's Geoff Hinton, the world's top deep learning expert, who was invited to speak at a Microsoft-funded conference in Whistler back in 2009.

Despite scepticism of deep learning at the time, the two Microsoft engineers who invited Hinton to speak on the subject ended up working together to run experiments with real data. The results? Deep learning and neural nets produced a 25 percent jump in accuracy, a jaw-dropping number in a field where a 5 percent improvement is normally considered game-changing. The engineers published their results, setting off a worldwide race to create neural-network algorithms.

These algorithms now power Google's Android voice recognition, IBM's speech systems, and Microsoft's real-time Skype Translate, to name a few. One notable holdout? Apple.

But those in the tight-knit community of artificial intelligence researchers believe this is about to change. It’s clear, they say, that Apple has formed its own speech recognition team and that a neural-net-boosted Siri is on the way.

Apple hired its Siri manager, speech researcher Gunnar Evermann, away from Nuance. Arnab Ghoshal, a researcher from the University of Edinburgh, is another recent speech hire by the Cupertino company.

Alex Acero, a senior director in Apple's Siri group, is a 20-year speech technology research veteran previously at Microsoft. He managed the two Microsoft engineers who invited Hinton to that conference in Whistler five years ago to talk about deep learning.

“Apple is not hiring only in the managerial level, but hiring also people on the team-leading level and the researcher level,” says Abdel-rahman Mohamed, a postdoctoral researcher at the University of Toronto, who was courted by Apple. “They’re building a very strong team for speech recognition research.”

Microsoft's Peter Lee, who attended that Whistler conference in 2009 and published the research done with Hinton's neural-net algorithms, believes Apple needs only six months to implement neural nets and make Siri even more powerful, eventually catching up to Microsoft and Google.

“All of the major players have switched over except for Apple Siri,” he says. “I think it’s just a matter of time.”

Siri is great when it works, but it definitely needs improvement in both speech recognition and speed (Google Now is much faster). It sounds like Apple has some major improvements in store for Siri, which many users will appreciate.