At The Data Incubator we run a free eight-week data science fellowship to help our Fellows land industry jobs. We love Fellows with diverse academic backgrounds that go beyond what companies traditionally think of when hiring data scientists. Aurora was a Fellow in our Spring 2016 cohort who landed a job with Verizon Wireless.

Tell us about your background. How did it set you up to be a great data scientist?

I obtained my Ph.D. in Neurobiology and Behavior from UC Irvine in 2014. For my dissertation, I used magnetic resonance imaging (MRI) to collect data on the brain activity underlying autobiographical memory. Accurate analysis of MRI data demanded the ability to preprocess and clean data, as well as to automate the processing steps using Matlab and R. Understanding how to properly use these tools made it much easier to pick up a new programming language (Python). Furthermore, applying statistical concepts to many forms of data from diverse scenarios helped me become a well-rounded data scientist who excels at analyzing novel datasets.

At The Data Incubator we run a free eight-week data science fellowship to help our Fellows land industry jobs. We love Fellows with diverse academic backgrounds that go beyond what companies traditionally think of when hiring data scientists. Wendy was a Fellow in our Winter 2017 cohort who landed a job with one of our hiring partners, Facebook.

Tell us about your background. How did it set you up to be a great data scientist?

I have a PhD in Electrical Engineering from Stanford University, where I’m currently a postdoc. My doctoral and postdoctoral research focus on the translation of novel magnetic resonance imaging (MRI) technologies to clinical neuroimaging applications, and the extraction of “hidden” imaging biomarkers from conventional clinical images. In my research, I utilized my engineering, programming, study design, and communication skills to solve interdisciplinary problems with real-world impact. I am now pivoting to data science, because I want to use my quantitative and analytical skills to discover hidden insights and guide decision-making for immediate applications in industry.

At The Data Incubator we pride ourselves on having the latest data science curriculum. Much of our course material is based on feedback from corporate and government partners about the technologies they are looking to learn. However, we wanted to develop a more data-driven approach to what we teach in our data science corporate training and our free fellowship for data science masters and PhDs looking to begin their careers in the industry.

This report is the second in a series analyzing data science related topics; to see more, be sure to check out our R Packages for Machine Learning report. We thought it would be useful to the data science community to rank and analyze a variety of topics related to the profession in simple, easy-to-digest cheat sheets, rankings, or reports.

Today, we’re excited to announce that we’re teaming up with JUST Capital to help crowd-source data science for social good. The Data Incubator offers a free eight-week data science fellowship for those with a PhD or a masters degree looking to transition into data science. As a part of the application process, students are asked to submit a data science capstone project, and the best students are invited to work on them during the fellowship. JUST Capital is helping provide data and project prompts to harness the collective brainpower of The Data Incubator fellows to solve these high-impact social problems.

These projects focus on applied data science techniques with tangible impacts on JUST Capital’s mission.

The projects are open ended and creativity is encouraged. The documents provided below are suitable for analysis, but one should not shy away from seeking out additional sources of data.

JUST Capital is a nonprofit that provides information and rankings on how large corporations perform on issues that matter most to the public. We give individuals a voice on what really matters to them, and evaluate how companies perform on those issues. By providing the right knowledge and making it easy to access and understand, we believe capital will flow to corporations that are more JUST, ultimately leading to a balanced business world that takes into account human needs that are so often neglected today. The meaning of JUST is defined by the American public as fair, equitable and balanced. In 2016, JUST Capital surveyed nearly 4,000 Americans from all regions and walks of life, in its second annual Poll on Corporate America. The issues identified by the public form the basis of our benchmark — it is against these Drivers and Components that we measure corporate performance. The most important factors broadly relate to employees, customers, company leadership, the environment, communities and investors.

Spark

Spark is one of the most popular open-source distributed computation engines, offering a scalable, flexible framework for processing huge amounts of data efficiently. The recent 2.0 release brought a number of significant improvements, including Datasets (a typed extension of DataFrames), expanded support for SparkR, and more. One of the great things about Spark is that it’s relatively self-contained and doesn’t require a lot of extra infrastructure to work. While Spark’s latest release is 2.1.0 at the time of publishing, we’ll use 2.0.1 throughout this post.

Jupyter

Jupyter notebooks are an interactive way to code that enables rapid prototyping and exploration. Jupyter essentially connects a browser-based frontend, via the Jupyter server, to an interactive REPL underneath (the kernel) that processes snippets of code. The advantage to the user is being able to write code in small chunks that can be run independently but share the same namespace, which greatly facilitates testing or trying multiple approaches in a modular fashion. The platform supports a number of kernels (the processes that actually run the code) besides the out-of-the-box Python one, but connecting Jupyter to Spark is a little trickier. Enter Apache Toree, a project meant to solve this problem by acting as a middleman between a running Spark cluster and other applications.

In this post I’ll describe how we go from a clean Ubuntu installation to being able to run Spark 2.0 code on Jupyter.
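As a rough sketch of the kind of setup involved (the package version, Spark location, and flags here are assumptions for illustration; consult the Toree release notes for your environment), registering Toree as a Jupyter kernel looks roughly like this:

```shell
# Install the Apache Toree kernel package (version availability on PyPI is an assumption)
pip install toree

# Register Toree as a Jupyter kernel, pointing it at an existing Spark installation.
# The SPARK_HOME path below is hypothetical; substitute wherever Spark is unpacked.
jupyter toree install --spark_home=/usr/local/spark --user
```

After this, a "Toree" kernel should appear in the Jupyter kernel menu, letting notebook cells run against the Spark cluster.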

At The Data Incubator we run a free eight-week data science fellowship to help our Fellows land industry jobs. We love Fellows with diverse academic backgrounds that go beyond what companies traditionally think of when hiring data scientists. Armand was a Fellow in our Fall 2016 cohort who landed a job with KPMG.

Tell us about your background. How did it set you up to be a great data scientist?

I received my Bachelor’s degree in Mechanical Engineering from NC State University. After college, I became a management consultant specializing in program and strategic management. As a consultant, I saw the value of data-driven decisions and extracting insights from data. As a result, I decided to go back to school to obtain my Master’s in Systems Engineering. There I was introduced to the R programming language, data mining techniques, and applications of optimization. My Master’s not only exposed me to data science, but it also provided me with a framework to approach complex problems.

At The Data Incubator we pride ourselves on having the latest data science curriculum. Much of our course material is based on feedback from corporate and government partners about the technologies they are looking to learn. However, we wanted to develop a more data-driven approach to what we teach in our data science corporate training and our free fellowship for data science masters and PhDs looking to begin their careers in the industry.

This report is the first in a series analyzing data science related topics. We thought it would be useful to the data science community to rank and analyze a variety of topics related to the profession in simple, easy-to-digest cheat sheets, rankings, or reports.

People interested in seeing the Broadway musical Hamilton (and there are still many of them, with demand driving starting ticket prices to $600) can enter Broadway Direct’s daily lottery. Winners can receive up to 2 tickets (out of 21 available tickets) for a total of $10.

What’s the probability of winning?

How easy is it to win these coveted tickets? Members of NYC’s Data Incubator Team have collectively tried and failed 120 times. Given our data, we cannot simply divide the number of successes by the number of trials to calculate our chances of winning — we would get zero (and the odds, which are apparently small, are clearly non-zero).

This kind of situation comes up in many guises in business and big data, and because we are a data science corporate training company, we decided to use statistics to determine the answer. Say you are measuring the click-through rate (CTR) of a piece of organic or paid content, and out of 100 impressions you have not observed any clicks. The measured CTR is zero, but the true CTR is likely not zero. Alternatively, suppose you are measuring the rate of adverse side effects of a new drug. You have tested 40 patients and haven’t found any, but you know the chance is unlikely to be zero. So what are the odds of observing a click or a side effect?
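As a quick illustration of why the answer isn’t zero, two standard textbook estimates give small but non-zero values for a 0-for-120 record like ours. This is a minimal sketch, not the full analysis: the "rule of three" gives an approximate 95% upper confidence bound of 3/n, and Laplace’s rule of succession gives a point estimate of (s + 1)/(n + 2).

```python
def rule_of_three_upper_bound(trials):
    """Approximate 95% upper confidence bound on p when 0 successes
    are observed in `trials` independent attempts."""
    return 3.0 / trials

def laplace_estimate(successes, trials):
    """Laplace's rule of succession: (s + 1) / (n + 2)."""
    return (successes + 1) / (trials + 2)

n = 120  # lottery entries with zero wins, as in our data
print(rule_of_three_upper_bound(n))   # 3/120 = 0.025
print(laplace_estimate(0, n))         # 1/122, roughly 0.0082
```

So even with 120 straight losses, the data are consistent with a per-entry win probability anywhere up to about 2.5%, and a simple smoothed estimate puts it near 0.8%. The same two formulas apply directly to the zero-click CTR and zero-side-effect examples above.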

At The Data Incubator we run a free eight-week data science fellowship to help our Fellows land industry jobs. We love Fellows with diverse academic backgrounds that go beyond what companies traditionally think of when hiring data scientists. Bernard was a Fellow in our Fall 2016 cohort who landed a job with Uptake.

Tell us about your background. How did it set you up to be a great data scientist?

I studied Materials Science and Engineering at Northwestern University for my PhD. Graduate school prepared me with an array of technical skills including programming, statistical analysis, and the ability to build, communicate, and defend a scientific argument. These are all important in producing data science products and presenting them to those at all levels of a corporate structure.

What do you think you got out of The Data Incubator?

TDI helped me leverage my programming and critical thinking skills toward a career in data science by giving me essential skills and project experience that made me stand out from other advanced-degree STEM graduates. These include machine learning, parallel programming, and interactive data visualization. TDI also connected me to a cohort of accomplished students that has been a great support as I’ve started my career.

At The Data Incubator we run a free eight-week data science fellowship to help our Fellows land industry jobs. We love Fellows with diverse academic backgrounds that go beyond what companies traditionally think of when hiring data scientists. Paul was a Fellow in our Fall 2016 cohort who landed a job with Cloudera.

Tell us about your background. How did it set you up to be a great data scientist?

Following the completion of my PhD in Electrical and Computer Engineering in 2009, I joined Palantir Technologies as a Forward Deployed Engineer (a client-facing software engineer). There, I helped Palantir enter a new vertical, Fortune 500 companies, where I built data integration and analysis software for novel commercial workflows. I left Palantir in 2012, and in 2013 I co-founded SolveBio, a genomics company whose mission is to help improve the variant-curation process: the process by which clinicians and genetic counselors research genetic mutations and label them as pathogenic, benign, or unknown. At SolveBio, my work focused primarily on building scalable data cleansing, transformation, and ingestion infrastructure to power the SolveBio genomics API. I also worked closely with geneticists and other domain experts in a semi-client-facing role.

The theme of my six years as a software engineer has been helping domain experts, whether they be fraud investigators at a bank or clinicians at a hospital, analyze disparate data to make better decisions. I have built infrastructure in both Java and Python, have used large SQL and NoSQL databases, and have spent countless hours perfecting Bash hackery (or wizardry, depending on your perspective).

My experiences as a software engineer were very relevant to data science in that I learned many ways to access, manipulate, and understand a variety of datasets from a variety of sources in a variety of formats. As the adage goes, “Garbage in, garbage out.” Nowhere is this more true than in data science. Performing good data science requires cleaning and organizing data, and I feel very comfortable with that process.