Artificial intelligence (AI) can help improve the ability of brain imaging techniques to predict Alzheimer’s disease early, according to a study.

Researchers from the University of California, San Francisco (UCSF) trained a deep learning algorithm on a specialised imaging technology known as 18F-fluorodeoxyglucose positron emission tomography (FDG-PET).

The training data included more than 2,100 FDG-PET brain images from 1,002 patients, and the algorithm was then tested on an independent set of 40 imaging exams from 40 patients.

The results showed that the algorithm was able to teach itself metabolic patterns that corresponded to Alzheimer’s disease.

It also achieved 100 per cent sensitivity at detecting the disease an average of more than six years prior to the final diagnosis.
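Sensitivity here means the share of patients who truly went on to develop Alzheimer's that the model correctly flagged; 100 per cent sensitivity means no such case was missed. A minimal sketch of the metric, using made-up labels rather than the study's data or code:

```python
def sensitivity(y_true, y_pred):
    """True-positive rate: correctly flagged positives / all actual positives."""
    true_pos = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    actual_pos = sum(y_true)
    return true_pos / actual_pos

# Hypothetical labels: 1 = later diagnosed with Alzheimer's, 0 = not.
y_true = [1, 1, 0, 1, 0, 0]
y_pred = [1, 1, 1, 1, 0, 0]  # one false positive, but no missed case
print(sensitivity(y_true, y_pred))  # 1.0
```

Note that a model can reach perfect sensitivity while still producing false positives, which is why the authors pair it with other tests.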

“We were very pleased with the algorithm’s performance. It was able to predict every single case that advanced to Alzheimer’s disease,” said Jae Ho Sohn, from UCSF’s Radiology and Biomedical Imaging Department.

“If FDG-PET with AI can predict Alzheimer’s disease this early, beta-amyloid plaque and tau protein PET imaging can possibly add another dimension of important predictive power,” he added in the paper, published in the journal Radiology.

While early diagnosis of Alzheimer’s is extremely important for treatment, it has proven challenging.


Although the cause of the progressive brain disorder remains unconfirmed, research has linked the disease process to changes in metabolism, as shown by glucose uptake in certain regions of the brain.

These changes can be difficult to recognise.

“If we diagnose Alzheimer’s disease when all the symptoms have manifested, the brain volume loss is so significant that it’s too late to intervene,” Sohn said.

“If we can detect it earlier, that’s an opportunity for investigators to potentially find better ways to slow down or even halt the disease process,” he noted.

Sohn explained that the algorithm could be a useful tool to complement the work of radiologists — especially in conjunction with other biochemical and imaging tests — in providing an opportunity for early therapeutic intervention. (IANS)


Google is adding an artificial intelligence (AI)-powered offline dictation feature to its Gboard keyboard for Pixel phones that allows users to dictate emails and texts even without an Internet connection.

“We’re happy to announce the rollout of an end-to-end, all-neural, on-device speech recognizer to power speech input in Gboard which is always available, even when you are offline,” Johan Schalkwyk of Google’s Speech Team wrote in a blog post on Tuesday.

Google has designed the feature to work at the character level.


“As you speak, it outputs words character-by-character, just as if someone was typing out what you say in real-time. It is exactly as you’d expect from a keyboard dictation system,” Schalkwyk said.
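The character-level behaviour Schalkwyk describes can be pictured as a stream that emits one letter at a time as audio arrives, the way a keyboard shows live dictation. A toy simulation of that interface — not Google's recognizer, just an illustration of streaming output:

```python
import time

def stream_characters(transcript, delay=0.0):
    """Yield a transcript one character at a time, mimicking a
    streaming recognizer emitting partial results as audio arrives."""
    for ch in transcript:
        time.sleep(delay)  # stand-in for waiting on incoming audio frames
        yield ch

# Characters appear one by one, like live keyboard dictation.
output = "".join(stream_characters("send the email"))
print(output)
```

Emitting characters as they are recognised, rather than waiting for a full utterance, is what makes the feature feel like real-time typing.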

To make the speech recognition feature more responsive, Google said it hosts the new model on the device itself, avoiding the latency and inherent unreliability of communication networks.