Machine Learning

Medical Text Corpora

Each day, at least 10,000 updates are made to the archives of biomedical literature. No expert in any single discipline can absorb all of this information, which is why we are working on extracting knowledge from medical literature using machine learning.

Word Embeddings

We are working on the Skip-gram model for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships.
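The core of Skip-gram training is predicting each context word from its center word. A minimal sketch of the training-pair extraction step, on an assumed toy sentence (a real system would feed these pairs into an embedding trainer such as word2vec):

```python
def skipgram_pairs(tokens, window=2):
    """Yield (center, context) pairs within a symmetric window of each token."""
    pairs = []
    for i, center in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:  # skip the center word itself
                pairs.append((center, tokens[j]))
    return pairs

# Toy medical sentence, purely illustrative.
sentence = "aspirin inhibits platelet aggregation".split()
pairs = skipgram_pairs(sentence, window=1)
print(pairs)  # 6 pairs, e.g. ('aspirin', 'inhibits'), ('inhibits', 'aspirin'), ...
```

Each pair becomes one training example; larger windows trade syntactic precision for broader topical context.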

Statistical Learning

Topic Modelling

A word is worth a thousand vectors. We apply vector-based toolkits to medical corpora to test their potential for improving the accessibility of medical knowledge. The end objective is to draw previously unseen correlations and to formulate and test hypotheses astonishingly quickly.
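Drawing correlations with vector toolkits typically reduces to nearest-neighbour search in embedding space. A minimal sketch, using hypothetical toy vectors (real vectors would come from a model trained on a medical corpus):

```python
import math

# Hypothetical 3-dimensional embeddings for illustration only.
vecs = {
    "aspirin":   [0.9, 0.1, 0.0],
    "ibuprofen": [0.8, 0.2, 0.1],
    "fracture":  [0.0, 0.1, 0.9],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Nearest neighbour of "aspirin" among the other terms.
query = "aspirin"
best = max((w for w in vecs if w != query),
           key=lambda w: cosine(vecs[query], vecs[w]))
print(best)  # with these toy vectors: ibuprofen
```

Terms with related meanings land near each other, which is what makes unseen correlations in a corpus discoverable by simple geometry.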

Recurrent Neural Networks

We train an RNN-based model to mimic a complex set of morphological and syntactic transformations applied by a state-of-the-art rule-based system; the learned model generalizes better than the rule-based system to concepts not seen during training.
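The recurrence at the heart of such a model is a single cell applied step by step over a sequence. A sketch of one vanilla RNN step with toy, untrained weights (a real model would learn them and run over character or token sequences):

```python
import math

def rnn_step(x, h, Wxh, Whh, b):
    """One vanilla RNN step: h_t = tanh(Wxh @ x + Whh @ h_prev + b)."""
    out = []
    for i in range(len(h)):
        s = b[i]
        s += sum(Wxh[i][j] * x[j] for j in range(len(x)))   # input contribution
        s += sum(Whh[i][j] * h[j] for j in range(len(h)))   # recurrent contribution
        out.append(math.tanh(s))
    return out

# Toy weights for a 2-dimensional input and hidden state.
Wxh = [[0.5, -0.2], [0.1, 0.3]]
Whh = [[0.1, 0.0], [0.0, 0.1]]
b = [0.0, 0.0]

h = [0.0, 0.0]
for x in ([1.0, 0.0], [0.0, 1.0]):  # a two-step input sequence
    h = rnn_step(x, h, Wxh, Whh, b)
print(h)  # final hidden state summarizes the whole sequence
```

Because the same weights are reused at every time step, the network is "deep in time": information from early inputs flows forward through the hidden state.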

Recursive Neural Networks

Similar to how recurrent neural networks are deep in time, recursive neural networks are deep in structure, because of the repeated application of recursive connections.
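That repeated application can be sketched as one composition function applied at every internal node of a (toy) binary parse tree; the weights and tree below are purely illustrative:

```python
import math

def compose(left, right, W, b):
    """Parent vector: tanh(W @ [left; right] + b), same W at every node."""
    concat = left + right
    return [math.tanh(b[i] + sum(W[i][j] * concat[j] for j in range(len(concat))))
            for i in range(len(b))]

def encode(tree, leaves, W, b):
    """Recursively encode a nested tuple of leaf names into one vector."""
    if isinstance(tree, str):
        return leaves[tree]
    l = encode(tree[0], leaves, W, b)
    r = encode(tree[1], leaves, W, b)
    return compose(l, r, W, b)

# Hypothetical 2-dimensional leaf embeddings and weights.
leaves = {"acute": [0.1, 0.2], "renal": [0.3, 0.1], "failure": [0.2, 0.4]}
W = [[0.2, 0.1, 0.0, 0.3], [0.1, 0.0, 0.2, 0.1]]
b = [0.0, 0.0]

vec = encode((("acute", "renal"), "failure"), leaves, W, b)
print(vec)  # one phrase vector, same dimension as the leaves
```

The depth of the computation grows with the depth of the tree rather than the length of the sequence, which is precisely the "deep in structure" property.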

Helping you find the science you need!

We think that if we could only read and understand all of the scientific data humans have created, not to mention connect the dots in that data, we would already have solutions to a number of pressing problems!