A Medium article on visualizing high-dimensional data with PCA and t-SNE in Python. [26/11/2018]
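A minimal sketch of the usual workflow (assuming scikit-learn and its bundled digits dataset, which the article itself may not use): run PCA first to denoise and shrink the data, then let t-SNE embed the result into 2-D for plotting.

```python
# Hedged sketch: PCA to ~30 components, then t-SNE down to 2-D.
# Chaining PCA before t-SNE is a common trick to speed up t-SNE.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

X, _ = load_digits(return_X_y=True)
X = X[:500]                                  # small subset keeps t-SNE fast

# Step 1: PCA compresses the 64 pixel dimensions to 30 components.
X_pca = PCA(n_components=30, random_state=0).fit_transform(X)

# Step 2: t-SNE embeds the PCA output into 2-D for a scatter plot.
X_2d = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X_pca)

print(X_2d.shape)  # (500, 2)
```

The 2-D coordinates in `X_2d` can then be scatter-plotted, coloured by class label.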

Google AI open-sources BERT, a pretraining model for NLP tasks. It mainly targets domains with little labeled data, such as sentiment analysis and question answering: BERT is pretrained to produce general-purpose language representations, which we then fine-tune on our own tasks, now requiring far less labeled data. [article][code] [02/11/2018]

OpenAI develops Random Network Distillation for reinforcement learning, which trains agents using curiosity, an intrinsic reward based on prediction error, rather than relying solely on an external reward function; it can even learn without any explicit objective. [article][code] [02/11/2018]
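The core idea can be sketched in a few lines of NumPy (a toy illustration under my own assumptions, not OpenAI's implementation): a fixed, randomly initialised target network is never trained, a predictor network learns to imitate it on visited states, and the prediction error serves as the curiosity bonus, so familiar states become "boring" while novel states stay rewarding.

```python
import numpy as np

rng = np.random.default_rng(0)
OBS, HID = 8, 16

# Fixed, randomly initialised target network f(s): never trained.
W_target = rng.normal(size=(OBS, HID))
# Predictor network g(s): trained to imitate the target on visited states.
W_pred = np.zeros((OBS, HID))

def intrinsic_reward(s):
    # Curiosity bonus = squared prediction error ||g(s) - f(s)||^2.
    return float(np.sum((s @ W_pred - s @ W_target) ** 2))

def train_predictor(s, lr=0.05):
    # One gradient step on 0.5 * ||g(s) - f(s)||^2 w.r.t. W_pred.
    global W_pred
    err = s @ W_pred - s @ W_target
    W_pred -= lr * np.outer(s, err)

# A state the agent visits often: its prediction error shrinks ...
familiar = rng.normal(size=OBS)
for _ in range(200):
    train_predictor(familiar)

# ... while an unvisited state keeps a large error, hence high reward.
novel = rng.normal(size=OBS)
print(intrinsic_reward(familiar) < intrinsic_reward(novel))  # True
```

In the real method both networks are deep nets over image observations, and the bonus is added to (or replaces) the environment reward.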

An interesting news article on what the future of healthcare looks like with AI. [26/10/2018]

Google AI Research introduces Fluid Annotation, which makes image annotation faster and easier. Previously, annotators had to carefully click along the boundary of each object in an image to outline it, which was tedious; Fluid Annotation makes tracing object boundaries much easier. [Demo][Article] [23/10/2018]

Follow the projects of the following MIT Media Lab research groups [17/10/2018]:

Scalable Cooperation – Re-imagining human cooperation in the age of social media and artificial intelligence

Google’s AutoML – a new direction in which an estimated 100x more computational power could substitute for machine-learning expertise. The Tree-based Pipeline Optimization Tool (TPOT) was one of the very first AutoML methods and open-source software packages developed for the data science community. [23/09/2018]
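TPOT itself evolves scikit-learn pipelines with genetic programming; as a rough illustration of the pipeline-search idea only (this is not TPOT's API), a random search over a tiny hand-picked pipeline space might look like:

```python
# Toy sketch of AutoML-style pipeline search (random, not genetic).
import random
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.tree import DecisionTreeClassifier

random.seed(0)
X, y = load_iris(return_X_y=True)

# Tiny search space: one scaler, optional PCA, one classifier.
scalers = [StandardScaler(), MinMaxScaler()]
reducers = [PCA(n_components=2), "passthrough"]
models = [LogisticRegression(max_iter=1000),
          DecisionTreeClassifier(random_state=0)]

best_score, best_pipe = -1.0, None
for _ in range(8):  # TPOT evolves candidates; here we just sample at random
    pipe = Pipeline([
        ("scale", random.choice(scalers)),
        ("reduce", random.choice(reducers)),
        ("clf", random.choice(models)),
    ])
    score = cross_val_score(pipe, X, y, cv=3).mean()
    if score > best_score:
        best_score, best_pipe = score, pipe

print(round(best_score, 3))
```

TPOT replaces the random sampling above with mutation and crossover of pipeline structures, scored the same way via cross-validation.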

If you are starting to write your first research paper and are seriously struggling, you can follow this self-help series by Prof. Jari Saramäki. I found it very useful and am still working through it. [22/07/2018]