
At Springboard, we recently sat down with Michael Beaumier, a data scientist at Google, to discuss his transition into the field, what the interview process is like, the future of data wrangling, and the advice he has for aspiring data professionals.

The full video Q&A is below, but here are some of the highlights.

You started off with a Ph.D. in physics and now you’re a data scientist. How did that transition happen?

I think in physics, one of the things that attracted me most to the field I studied, which was particle physics, was the ability to leverage computer science, mathematical modeling, and data visualization to solve big questions. The longer I was in that field, the more I realized that I enjoyed that process of problem-solving with those techniques and tools more than the physics itself. So it was a natural transition for me to move to a career that felt like it had higher impact, a wide variety of work, and good career opportunities, and data science was that move for me.

Before Google, you worked at Mercedes. What were some of the projects that you worked on there?

The big problem we were solving at Mercedes was basically: how do you build machine learning into an environment with limited-to-no data connectivity? One piece of that was: how do you do it in a car interface, with a very low-power computer like a Raspberry Pi, and create an interface that can customize itself to the user's needs based on context? This was released as MBUX in 2017, and that was a pretty cool project because there were a lot of challenges you wouldn't normally have to solve if you had access to a massive data set or connectivity.

We pulled questions from some of our Springboard students and this one is from Miguel, who is in our data science vertical. He asked, “Do you have a process for thinking about the data before you start implementing the tools?”

That's a great question, and I would add another piece to it: it kind of depends on where you're at. The Mercedes answer would be that you have to build the tools, and usually the way you approach that is to think about the data you have and what kind of features you have. Specifically: where is the information hiding that links your input to your output? And what kind of transformation do you need to apply to the data set to enable modeling?

The key is to do all your work up front, I would say, because models are kind of a dime a dozen. With scikit-learn it's the same API for every model, so you can call model.fit a thousand times, combine the outputs, and then fit a model on top of that. But all the real work now, I think, is in how you engineer features and build the model into a pipeline.
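To make that "same API" point concrete, here is a minimal sketch of fitting several scikit-learn models and then fitting a model on top of their outputs; the dataset and estimator choices are illustrative placeholders, not ones Michael mentioned:

# Minimal sketch: many models through one API, with a model stacked on top.
# The dataset and estimators are illustrative placeholders.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Every estimator exposes the same fit/predict API, so swapping models is cheap.
base_models = [
    ("ridge", Ridge()),
    ("forest", RandomForestRegressor(n_estimators=100, random_state=0)),
]

# StackingRegressor fits the base models, then fits the final model on their outputs.
stack = StackingRegressor(estimators=base_models, final_estimator=LinearRegression())
stack.fit(X_train, y_train)
print(stack.score(X_test, y_test))  # R^2 on held-out data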

Another one of our data science students wants to know a little bit more about your interview process at Google. What was that like and how did you search for your job?

I actually interviewed at Google twice. The first time I failed utterly and the second time I got through. There are many paths into interviewing at Google, and I think a pretty common one, if you don't know anyone there, is that a recruiter reaches out to you. Once you have a job, it's all posted on LinkedIn, and your LinkedIn profile looks pretty good, you're gonna get contacted by Facebook, Amazon, Google, Netflix. They're gonna talk to you. Once you go through the recruiter intake process, usually they'll kind of hold your hand through the whole thing. The first piece is a technical screen. If the recruiter feels like that's unnecessary, they may move you directly to an onsite.

I've interviewed with a lot of companies, and one of the things I always do is ask the recruiter: what are the metrics you will be evaluating me on in these interviews, and what topics will be covered? I think some people feel like that's off-limits, but, you know, this isn't school; you're allowed to ask what's on the test before you take it. So I encourage people to do that. It's not cheating.

I think a lot of people, when they interview, get really fixated on "I have to get to the right answer" and maybe freeze because they get really nervous. Those are totally natural reactions. I would say that if you think of the interview process more as a performance, you'll have a much better time, you'll probably enjoy it more, and I think the outcome will be better.

We also have a question from Guy, who is one of our community managers. He wanted to know, “How do you develop a knack for data wrangling? Do you have a process for that?”

Honestly, I think that data wrangling is going to be one of the things that gets replaced by modeling. There's a whole field called data learning, where basically you have a data set that may be super messy, you throw it into a data learning model, and the model figures out what's important. OK, but that's not the question. The question is: how do you develop an intuition for data?

So, it's tricky; I would say it depends on the size of the data set. When the data set becomes very large in terms of the number of observations or features... you know, I work on projects that regularly have tens of thousands of features, and that just can't fit into your head as a human being. So I think the only way you can develop an intuition for data in that space is actually from a modeling perspective: you try models, you understand what over- and underfitting are, and how to tune your model...

In the case where you have a smaller number of features, I think the best way to build intuition is to just try it. There's a data set I send out to people from the UCI Machine Learning Repository: the Auto MPG Data Set. I love it because it has a mix of categorical features and numeric features that are actually categorical, and the goal is: can you model the miles per gallon of these cars? It's so rich and so small: there are only like 300 observations and eight features, but you can do a lot with it. So I think the crucial piece to developing intuition is to practice. If you're not practicing, you can't develop that intuition.
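If you want to try that data set yourself, here is one way to load it with pandas; the UCI URL and column names below are assumptions based on the repository's published layout, so verify them before relying on this:

# Sketch: load the UCI Auto MPG data set into pandas.
# The URL and column names are assumptions; check them against the repository.
import pandas as pd

url = "https://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data"
columns = ["mpg", "cylinders", "displacement", "horsepower", "weight",
           "acceleration", "model_year", "origin", "car_name"]

# The file is whitespace-delimited and uses "?" for missing horsepower values.
df = pd.read_csv(url, sep=r"\s+", names=columns, na_values="?")
print(df.shape)   # expect roughly 400 rows and 9 columns
print(df.dtypes)  # note: origin is stored as a number but is really a category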

Just to put the nail in that: you need to be comfortable handling categorical features, you need to understand how to turn numerical features into categorical features and how to treat numerical features, and you need to understand what normalization is, which models are robust to feature normalization, and which models it doesn't matter for. There's no easy way; I would just say: try it. When people ask me to help them prepare for data science interviews, I just tell them to do data challenges over and over again, and then come talk to me and tell me how they feel about the static data challenges.
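As a rough illustration of the transformations he's describing, here is a sketch on invented data: one-hot encoding a numeric-coded categorical feature and normalizing a true numeric one.

# Sketch: typical feature transformations, on invented data.
import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({
    "origin": [1, 3, 2, 1],  # numeric codes that are really categories
    "weight": [3504.0, 2130.0, 2625.0, 3433.0],
})

# Turn the numeric-coded categorical into explicit one-hot columns.
df = pd.get_dummies(df, columns=["origin"], prefix="origin")

# Normalize the true numeric feature. This matters for distance- and
# gradient-based models (k-NN, SVMs, regularized linear models); tree-based
# models are largely insensitive to it.
df["weight"] = StandardScaler().fit_transform(df[["weight"]]).ravel()
print(df)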

We have one last question from a student, Cory. He asks, “How important is SQL in comparison to Python in 2019?”

That's an interesting question, because I think people love putting SQL on a mega pedestal as a core thing that all data scientists have to use. My opinion is that if your data set fits into memory, it literally doesn't matter what you do, because computers are fast and human brains are slow, and you're not going to notice the difference between doing something in a pandas dataframe versus an R data frame versus SQL.

However, you have to know SQL. The universe has decided that in data science interviews, you must know SQL. So, what's important in 2019? I would say if your data set doesn't fit into memory, then you have to use SQL, and if it does, it doesn't matter, with about a million caveats to that statement.
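To make the "it doesn't matter in memory" point concrete, here is the same aggregation in pandas, with the equivalent SQL in a comment; the table and column names are invented for illustration:

# The same group-by in pandas and, as a comment, in SQL.
import pandas as pd

employees = pd.DataFrame({
    "start_year": [2001, 2004, 2005, 2004],
    "salary": [10.5, 20.0, 25.0, 12.5],
})

# SQL equivalent: SELECT start_year, AVG(salary) FROM employees GROUP BY start_year
print(employees.groupby("start_year")["salary"].mean())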

This Q&A has been lightly edited and condensed for clarity.

Ready to start (or grow) your data science career? Check out Springboard’s Data Science Career Track—you’ll learn the skills and get the personalized guidance you need to land the job you want.


This is a guest post from Michael Li of The Data Incubator. The Data Incubator runs a free eight-week data science fellowship to help its Fellows transition from academia to industry. This post runs through some of the toolsets you'll need to know to kickstart your data science career.

If you're an aspiring data scientist but still processing your data in Excel, you might want to upgrade your toolset. Why? First, while advanced features like Excel pivot tables can do a lot, they don't offer nearly the flexibility, control, and power of SQL, or of its functional equivalents in Python (pandas) or R (data frames). Second, Excel has low size limits, making it suitable for "small data," not "big data."

In this blog entry we'll talk about SQL, which should cover your "medium data" needs: the next level of data, where the rows no longer fit within Excel's roughly 1-million-row limit. SQL stores data in tables, which you can think of as spreadsheets with more structure. Each row represents a specific record (e.g., an employee at your company) and each column corresponds to an attribute (e.g., name, department id, salary). Critically, every value in a column must be of the same "type." Here is a sample of the table Employees:

| EmployeeId | Name  | StartYear | Salary | DepartmentId |
|------------|-------|-----------|--------|--------------|
| 1          | Bob   | 2001      | 10.5   | 10           |
| 2          | Sally | 2004      | 20     | 10           |
| 3          | Alice | 2005      | 25     | 20           |
| 4          | Fred  | 2004      | 12.5   | 20           |

SQL has many keywords that compose its query language, but the ones most relevant to data scientists are SELECT, WHERE, GROUP BY, and JOIN. We'll go through each individually.

SELECT

SELECT is the foundational keyword in SQL: every query begins with it. SELECT also lets you pick which columns to return. For example

SELECT Name, StartYear FROM Employees

returns

| Name  | StartYear |
|-------|-----------|
| Bob   | 2001      |
| Sally | 2004      |
| Alice | 2005      |
| Fred  | 2004      |

WHERE

The WHERE clause filters the rows. For example

SELECT * FROM Employees WHERE StartYear=2004

returns

| EmployeeId | Name  | StartYear | Salary | DepartmentId |
|------------|-------|-----------|--------|--------------|
| 2          | Sally | 2004      | 20     | 10           |
| 4          | Fred  | 2004      | 12.5   | 20           |

GROUP BY

Next, the GROUP BY clause combines rows using aggregation functions like COUNT (count) and AVG (average). For example,

SELECT StartYear, COUNT(*) as Num, AVG(Salary) as AvgSalary
FROM Employees
GROUP BY StartYear

returns

| StartYear | Num | AvgSalary |
|-----------|-----|-----------|
| 2001      | 1   | 10.5      |
| 2004      | 2   | 16.25     |
| 2005      | 1   | 25        |

JOIN

Finally, the JOIN clause allows us to join in other tables. For example, assume we have a table called Departments:

| DepartmentId | DepartmentName |
|--------------|----------------|
| 10           | Sales          |
| 20           | Engineering    |

We could use JOIN to combine the Employees and Departments tables, matching rows ON their DepartmentId fields. For example, a query like the following attaches each employee's department name:
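SELECT Employees.Name, Employees.Salary, Departments.DepartmentName
FROM Employees
JOIN Departments ON Employees.DepartmentId = Departments.DepartmentId

returns

| Name  | Salary | DepartmentName |
|-------|--------|----------------|
| Bob   | 10.5   | Sales          |
| Sally | 20     | Sales          |
| Alice | 25     | Engineering    |
| Fred  | 12.5   | Engineering    |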

We've ignored a lot of details about joins (for example, there are at least four types: inner, left, right, and full outer), but hopefully this gives you a good picture.

Conclusion and Further Reading

With these basic commands, you can get a lot of data processing done. Don't forget that you can nest queries and create really complicated joins. It's a lot more powerful than Excel and gives you much better control over your data. Of course, there's far more to SQL than what we've mentioned; this is only intended to whet your appetite and give you a taste of what you're missing.
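As a small taste of nesting, a subquery can feed a WHERE clause. For example, this query finds the employees paid more than the company-wide average (17 for our sample table):

SELECT Name, Salary
FROM Employees
WHERE Salary > (SELECT AVG(Salary) FROM Employees)

which returns Sally and Alice.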

And when you’re ready to step it up from “medium data” to “big data”, you should apply for a fellowship at The Data Incubator where we work with current-generation data-processing technologies like MapReduce and Spark!

This is one of the best SQL tutorials I have seen. Plus, it has the huge added advantage of not requiring you to set up your own database first (the data is already available). Setting up your own database can be a bit overwhelming when you're first learning, so if you're looking to learn SQL, now is a great time to start.