GoodPaper

What is Topic Modeling? Why do we need it? The techniques are ingenious in how they work – try them yourself. By Goutam Nair, IIIT-Hyderabad.

Large amounts of data are collected every day. Topic modelling provides us with methods to organize, understand, and summarize large collections of textual information:

- Discovering hidden topical patterns that are present across the collection
- Annotating documents according to these topics
- Using these annotations to organize, search, and summarize texts

Topic modelling can be described as a method for finding a group of words (i.e., a topic) from a collection of documents that best represents the information in the collection.

It's complex, redundant, and confusing. There are too many layers in the technology stack, too many standards, and too many engines. Vendors? Too many. What is the user to do? By Andrew Brust, Datameer.
Learn Python & Swift 3 from ZERO to HERO, by Leo Trieu.

Risks and challenges: Creating course content, recording, and editing videos is time-consuming, but the risks here are low.

I've been doing this for courses at Code4Startup so I understand the potential challenges as well as the time commitment.
Top 10 IPython Tutorials for Data Science and Machine Learning.

Figuring Out the Algorithms of Intelligence. Marvin Minsky, the father of AI, passed away this year.

One of his inventions was the confocal microscope, which we used to take this high-resolution picture of a live brain circuit. Something in these cells allows them to automatically identify useful connections and establish useful networks out of information. By Nathan R. Wilson, Ph.D., Nara Logics.

Data science and knowledge discovery are among the most “brain-like” operations that a company performs, and their practitioners have a unique vantage point into the utility of artificial intelligence.

In Search of the Master Algorithm: Is there a general “process” by which data can be turned into knowledge, or a “rule” for learning rules?

Inspired by Biology – Data Storage Will Start to Reflect the Natural World: In the future, growing data trees of associations will be increasingly fused (like the unified “data lakes” that are evolving in advanced organizations).

Role of the Data Scientist: Pathways, not Manual Updates.

Bio: Nathan R.

Related:
43 New External Machine Learning Resources and Updated Articles. WIRED.

The Truth About Deep Learning - Quantified. Come on people — let’s get our act together on deep learning.

I’ve been studying and writing about DL for close to two years now, and it still amazes me how much misinformation surrounds this relatively complex learning algorithm. This post is not about whether deep learning is or is not over-hyped, as that is a well-documented debate. Rather, it’s a jumping-off point for a (hopefully) fresh, concise understanding of deep learning and its implications. This discussion/rant is somewhat off the cuff, but the whole point is to encourage those of us in the machine learning community to think clearly about deep learning. Let’s be bold and try to make some claims, based on actual science, about whether or not this technology will produce artificial intelligence. [Note added 05/05/16: Keep in mind that this is a blog post, not an academic paper.]

The Problem: Even the most academic among us mistakenly merge two very different schools of thought in our discussions on deep learning.

The Answer (?)

I Thought Of Sharing These 7 Machine Learning Concepts With You - HPC ASIA. It’s hard not to be fascinated by machine learning.

It’s overwhelmingly powerful. Even Steve Ballmer says machine learning will be the next era of computer science. There is a lot of buzz around machine learning nowadays, so I thought of discussing some simple machine learning concepts to motivate you. So let’s get started: 1) Ensemble Methods.

Dealing with Unbalanced Classes, SVMs, Random Forests, and Decision Trees in Python. An overview of dealing with unbalanced classes, and implementing SVMs, Random Forests, and Decision Trees in Python.

By Manu Jeevan, Big Data Examiner. So far I have talked about decision trees and ensembles, and I hope I have made the logic behind these concepts clear without getting too deep into the mathematical details. In this post, let’s get into action: I will be implementing the concepts we learned in those two blog posts.
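To give a flavor of what such an implementation can look like, here is a sketch (not the article's own code) that trains an SVM and a Random Forest on a synthetic unbalanced binary problem, using scikit-learn's `class_weight="balanced"` option to re-weight errors inversely to class frequency; the dataset and parameters are illustrative:

```python
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
from sklearn.metrics import recall_score

# Synthetic binary problem with a 9:1 class imbalance (class 1 is the minority)
X, y = make_classification(n_samples=500, weights=[0.9, 0.1], random_state=0)

# class_weight="balanced" penalizes mistakes on the rare class more heavily,
# which usually improves minority-class recall over the unweighted default
svm = SVC(kernel="rbf", class_weight="balanced").fit(X, y)
rf = RandomForestClassifier(n_estimators=100, class_weight="balanced",
                            random_state=0).fit(X, y)

print("SVM minority recall:", recall_score(y, svm.predict(X)))
print("RF  minority recall:", recall_score(y, rf.predict(X)))
```

Other common remedies for unbalanced classes include oversampling the minority class, undersampling the majority class, and choosing metrics like recall or F1 instead of raw accuracy.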
Introducing DeepText: Facebook's text understanding engine. Text is a prevalent form of communication on Facebook.

Understanding the various ways text is used on Facebook can help us improve people's experiences with our products, whether we're surfacing more of the content that people want to see or filtering out undesirable content like spam. With this goal in mind, we built DeepText, a deep learning-based text understanding engine that can understand, with near-human accuracy, the textual content of several thousand posts per second, spanning more than 20 languages. DeepText leverages several deep neural network architectures, including convolutional and recurrent neural nets, and can perform word-level and character-level learning.

We use FBLearner Flow and Torch for model training. Trained models are served with a click of a button through the FBLearner Predictor platform, which provides a scalable and reliable model distribution infrastructure.
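DeepText itself is built on deep neural networks, but the word-level vs. character-level distinction it mentions can be illustrated with a much simpler model. The sketch below (my own toy example, not Facebook's code) uses character n-gram features, which tolerate the misspellings and elongations ("loove", "sooo") that a purely word-level model would treat as unknown tokens:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["great product, love it", "terrible, want a refund",
         "loove it sooo much", "awful, total waste"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

# analyzer="char_wb" builds character n-grams within word boundaries,
# so "love" and "loove" share many overlapping 2-4 character features
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(texts, labels)
print(model.predict(["sooo great, loove it"]))
```

A character-aware deep model generalizes this idea: instead of hand-chosen n-grams, the network learns which sub-word patterns matter for the task.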