Pandas on HDFS with Dask Dataframes

In this post we use Pandas in parallel across an HDFS cluster to read CSV data.
We coordinate these computations with dask.dataframe. A screencast version of
this blogpost is available here
and the previous post in this series is available
here.

To start, we connect to our scheduler, import the hdfs module from the
distributed library, and read our CSV data from HDFS.
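The code for this step is elided from this copy of the post. A minimal sketch of what it might have looked like with the distributed library of early 2016 follows; the scheduler address and HDFS paths are placeholders, and names like hdfs.read_csv and Executor.persist are assumptions based on that era's API:

>>> from distributed import Executor, hdfs, progress
>>> e = Executor('scheduler-address:8786')   # placeholder scheduler address

>>> nyc2014 = hdfs.read_csv('/nyctaxi/2014/*.csv',
...                         parse_dates=['pickup_datetime', 'dropoff_datetime'])
>>> nyc2015 = hdfs.read_csv('/nyctaxi/2015/*.csv',
...                         parse_dates=['tpep_pickup_datetime', 'tpep_dropoff_datetime'])

>>> nyc2014, nyc2015 = e.persist([nyc2014, nyc2015])   # load into distributed memory
>>> progress(nyc2014, nyc2015)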

Our data comes from the New York City Taxi and Limousine Commission which
publishes all yellow cab taxi rides in
NYC for various
years. This is a nice model dataset for computational tabular data because
it’s large enough to be annoying while also deep enough to be broadly
appealing. Each year is about 25GB on disk and about 60GB in memory as a
Pandas DataFrame.

HDFS breaks up our CSV files into 128MB chunks on various hard drives spread
throughout the cluster. The dask.distributed workers each read the chunks of
bytes local to them and call the pandas.read_csv function on these bytes,
producing 391 separate Pandas DataFrame objects spread throughout the memory of
our eight worker nodes. The returned objects, nyc2014 and nyc2015, are
dask.dataframe objects which
present a subset of the Pandas API to the user, but farm out all of the work to
the many Pandas dataframes they control across the network.

Play with Distributed Data

If we wait for the data to load fully into memory then we can perform
pandas-style analysis at interactive speeds.

>>> nyc2015.head()
   VendorID tpep_pickup_datetime tpep_dropoff_datetime  passenger_count  \
0         2  2015-01-15 19:05:39   2015-01-15 19:23:42                1
1         1  2015-01-10 20:33:38   2015-01-10 20:53:28                1
2         1  2015-01-10 20:33:38   2015-01-10 20:43:41                1
3         1  2015-01-10 20:33:39   2015-01-10 20:35:31                1
4         1  2015-01-10 20:33:39   2015-01-10 20:52:58                1

   trip_distance  pickup_longitude  pickup_latitude  RateCodeID  \
0           1.59        -73.993896        40.750111           1
1           3.30        -74.001648        40.724243           1
2           1.80        -73.963341        40.802788           1
3           0.50        -74.009087        40.713818           1
4           3.00        -73.971176        40.762428           1

  store_and_fwd_flag  dropoff_longitude  dropoff_latitude  payment_type  \
0                  N         -73.974785         40.750618             1
1                  N         -73.994415         40.759109             1
2                  N         -73.951820         40.824413             2
3                  N         -74.004326         40.719986             2
4                  N         -74.004181         40.742653             2

   fare_amount  extra  mta_tax  tip_amount  tolls_amount  \
0         12.0    1.0      0.5        3.25             0
1         14.5    0.5      0.5        2.00             0
2          9.5    0.5      0.5        0.00             0
3          3.5    0.5      0.5        0.00             0
4         15.0    0.5      0.5        0.00             0

   improvement_surcharge  total_amount
0                    0.3         17.05
1                    0.3         17.80
2                    0.3         10.80
3                    0.3          4.80
4                    0.3         16.30

>>> len(nyc2014)
165114373
>>> len(nyc2015)
146112989

Interestingly it appears that the NYC cab industry has contracted a bit in the
last year. There are fewer cab rides in 2015 than in 2014.

When we ask for something like the length of the full dask.dataframe we
actually ask for the length of all of the hundreds of Pandas dataframes and
then sum them up. This process of reaching out to all of the workers completes
in around 200-300 ms, which is generally fast enough to feel snappy in an
interactive session.

The dask.dataframe API looks just like the Pandas API, except that we call
.compute() when we want an actual result.
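For example (a hypothetical snippet, not from the original post), a column reduction looks exactly like Pandas apart from the final call:

>>> nyc2015.passenger_count.sum().compute()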

We didn’t have to find columns or specify data types. We didn’t have to parse
each value with an int or float function as appropriate. We didn’t have to
parse the datetimes ourselves, but instead just specified a parse_dates=
keyword. The CSV parsing happened about as quickly as can be expected for this
format, clocking in at a network total of a bit under 1 GB/s.

Pandas is well loved because it removes all of these little hurdles from the
life of the analyst. If we tried to reinvent a new
“Big-Data-Frame” we would have to reimplement all of the work already well done
inside of Pandas. Instead, dask.dataframe just coordinates and reuses the code
within the Pandas library. It is successful largely due to work from core
Pandas developers, notably Masaaki Horikoshi
(@sinhrks), who have done tremendous work to
align the API precisely with the Pandas core library.

Analyze Tips and Payment Types

In an effort to demonstrate the abilities of dask.dataframe we ask a simple
question of our data: how do New Yorkers tip? The 2015 NYC Taxi data is
quite good about breaking down the total cost of each ride into the fare
amount, tip amount, and various taxes and fees. In particular this lets us
measure the percentage that each rider decided to pay in tip.

>>> nyc2015[['fare_amount', 'tip_amount', 'payment_type']].head()
   fare_amount  tip_amount  payment_type
0         12.0        3.25             1
1         14.5        2.00             1
2          9.5        0.00             2
3          3.5        0.00             2
4         15.0        0.00             2

In the first two rows we see evidence supporting the 15-20% tip standard
common in the US. Interestingly, the following three rows show zero tip.
Judging only by these first five rows (a very small sample) we see a strong
correlation with the payment type. We analyze this a bit more by counting
occurrences in the payment_type column, both for the full dataset and
filtered to zero-tip rides:
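The snippet for these counts is elided from this copy; a sketch using value_counts, which dask.dataframe mirrors from Pandas, might look like the following:

>>> nyc2015.payment_type.value_counts().compute()
>>> nyc2015[nyc2015.tip_amount == 0].payment_type.value_counts().compute()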

We find that almost all zero-tip rides correspond to payment type 2, and that
almost all payment type 2 rides don’t tip. My unscientific hypothesis here is
that payment type 2 corresponds to cash fares and that we’re observing a
tendency of drivers not to record cash tips. However, we would need more
domain knowledge about our data to actually make this claim with any degree
of authority.

Analyze Tip Fractions

Let’s make a new column, tip_fraction, and then look at the average of this
column grouped by day of week and by hour of day.

First, we need to filter out bad rows: both rows with this odd payment type
and rows with zero fare (there are a surprising number of free cab rides in
NYC). Second, we create a new column equal to the ratio tip_amount /
fare_amount.
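The snippet itself is elided from this copy; a sketch of those two steps, using the column names from the table above, might look like this:

>>> df = nyc2015[(nyc2015.fare_amount > 0) & (nyc2015.payment_type != 2)]
>>> df = df.assign(tip_fraction=df.tip_amount / df.fare_amount)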

Next we group by the pickup datetime column in order to see how the
average tip fraction changes by day of week and by hour. The groupby and
datetime handling of Pandas makes these operations trivial.
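A sketch of that groupby (the original snippet is elided; the .dt datetime accessor works the same in dask.dataframe as in Pandas), followed by a timed head call whose output is omitted here:

>>> dayofweek = df.groupby(df.tpep_pickup_datetime.dt.dayofweek).tip_fraction.mean()
>>> hour = df.groupby(df.tpep_pickup_datetime.dt.hour).tip_fraction.mean()

>>> %time df.head()   # touches only the first partition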

This head computation is about as fast as a film projector. You could perform
this roundtrip computation between every consecutive frame of a movie; to a
human eye this appears fluid. In the last post
we asked how low we could bring latency. In that post we were running
computations from my laptop in California and so were bound by transcontinental
latencies of 200ms. This time, because we’re operating from the cluster, we
can get down to 20ms. We’re only able to be this fast because we touch only a
single data element, the first partition. Things change when we need to touch
the entire dataset.

The length computation takes 200-300 ms. This computation takes longer because
it touches every individual partition of the data, of which there are 178. The
scheduler incurs about 1 ms of overhead per task; 178 tasks at 1 ms each, plus
a bit of network latency, accounts for the ~200 ms total. This means that the
scheduler will likely be the bottleneck whenever computations are very fast,
as is the case when computing len. Really, this is good news; it means that by
improving the scheduler we can reduce these durations even further.

If you look at the groupby computations above you can add the numbers in the
progress bars to see that we computed around 3000 tasks in around 7 seconds.
At roughly 1 ms of scheduler overhead per task, about 3 of those seconds are
overhead, so this computation is about half scheduler overhead and about half
actual computation.

Conclusion

We used dask+distributed on a cluster to read CSV data from HDFS
into a dask dataframe. We then used dask.dataframe, which looks identical to
the Pandas dataframe, to manipulate our distributed dataset intuitively and
efficiently.

We looked a bit at the performance characteristics of simple computations.

What doesn’t work

As always I’ll have a section like this that honestly says what doesn’t work
well and what I would have done with more time.

Dask dataframe implements a commonly used subset of Pandas functionality,
not all of it. It’s surprisingly hard to communicate the exact bounds of
this subset to users. Notably, in the distributed setting we don’t have a
shuffle algorithm, so groupby(...).apply(...) and some joins are not
yet possible.

If you want to use threads, you’ll need Pandas 0.18.0, which at the time of
this writing is still in release-candidate stage. This Pandas release fixes
some important GIL-related issues.

The 1ms overhead per task limit is significant. While we can still scale
out to clusters far larger than what we have here, we probably won’t be
able to strongly accelerate very quick operations until we reduce this
number.

We use the hdfs3 library to read
data from HDFS. This library seems to work great but is new and could use
more active users to flush out bugs.