optimize queries like - count distinct users for each gender

Details

Type: Improvement

Status: Open

Priority: Major

Resolution: Unresolved

Affects Version/s: 0.9.0

Fix Version/s: None

Component/s: None

Labels: None

Description

The Pig group operation does not usually have to deal with skew on the group-by keys if the foreach statement that works on the results of the group uses only algebraic functions on the bags. But for some queries, like the following, skew can be a problem:
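The original example query is not reproduced above; a minimal sketch of the kind of query being discussed (counting distinct users per gender, with relation and column names taken from the comments below) is:

USER_DATA = load 'file' as (USER, GENDER, AGE);
USER_GROUP_GENDER = group USER_DATA by GENDER parallel 100;
-- nested foreach: distinct the users within each gender's bag, then count them
DIST_USER_PER_GENDER = foreach USER_GROUP_GENDER {
    DISTINCT_USERS = distinct USER_DATA.USER;
    generate group as GENDER, COUNT(DISTINCT_USERS) as USER_COUNT;
}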

Since there are only 2 distinct values of the group-by key, only 2 reducers will actually get used in the current implementation, i.e. you can't get better performance by adding more reducers.
A similar problem exists when the data is skewed on the group key. With the current implementation, another problem is that Pig and MR have to deal with records containing extremely large bags that hold the large number of distinct user names, which results in high memory utilization and in having to spill the bags to disk.

The query plan should be modified to handle the skew in such cases and make use of more reducers.

Activity


Thejas M Nair
added a comment - 08/Feb/11 19:16
One way to mitigate the skew in the above example query is to add another group-by statement that uses both gender and user as the group-by key and does a partial aggregation. This introduces an additional MR job. The 2nd MR job will effectively use only 2 reducers, but the work that needs to be done in its reduce phase will be very small.
USER_DATA = load 'file' as (USER, GENDER, AGE);
USER_GROUP_GENDER_PART = group USER_DATA by (GENDER, USER) parallel 100;
-- there is only one distinct user per group since the USER column is one of the group-by columns, so just project 1 as the count
DIST_USER_PER_GENDER_PART = foreach USER_GROUP_GENDER_PART generate group.GENDER as GENDER, 1 as USER_COUNT;
USER_GROUP_GENDER = group DIST_USER_PER_GENDER_PART by GENDER;
-- the map-side combiner will do most of the work in parallel; the reduce only needs to process a few small records
DIST_USER_PER_GENDER = foreach USER_GROUP_GENDER generate group as GENDER, SUM(DIST_USER_PER_GENDER_PART.USER_COUNT) as USER_COUNT;

The DISTINCT optimization is often not applicable; consider, for example, a script that takes all pages on a website and generates COUNT(impressions), COUNT(distinct users). Doing the distinct operation first means we can no longer do COUNT(impressions).

An algebraic function applied to non-distinct bags can be decomposed in this case as follows:
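The code block for that decomposition does not appear above. Judging from the reply below (which refers to ALGFUNC(in.c4), FUNC(res_dist.c3) and ALGFUNC$Final), the proposed translation was roughly of the following shape; this is a reconstruction, not the original snippet, with in and c1..c4 as placeholder names and the Initial/Intermed/Final stages shown only loosely:

-- first MR job: group on the full key, including the "distinct" column c3;
-- ALGFUNC is algebraic, so this work is spread across many tasks even when (c1, c2) is skewed
gby_dist = GROUP in BY (c1, c2, c3) PARALLEL 100;
res_dist = FOREACH gby_dist GENERATE group.c1 as c1, group.c2 as c2, group.c3 as c3, ALGFUNC(in.c4) as partial;
-- second MR job: one small record per (c1, c2, c3) value
gby = GROUP res_dist BY (c1, c2) PARALLEL 100;
-- FUNC sees one row per distinct c3; the per-group partials are folded with ALGFUNC$Final,
-- which is the step the reply below discusses
res = FOREACH gby GENERATE FLATTEN(group) as (c1, c2), FUNC(res_dist.c3), ALGFUNC$Final(res_dist.partial);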


Thejas M Nair
added a comment - 10/Jun/11 15:50
The DISTINCT optimization is often not applicable; consider, for example, a script that takes all pages on a website and generates COUNT(impressions), COUNT(distinct users). Doing the distinct operation first means we can no longer do COUNT(impressions).
Yes, that optimization will not be applicable for this use case.
The translation you proposed helps to distribute the work of computing ALGFUNC(in.c4) across multiple tasks (even when there is skew on c1, c2). But FUNC(res_dist.c3) will still get computed on the reduce side (i.e. all records for a given value of c1, c2 go to one reducer), because the combiner will not get used: ALGFUNC$Final is not algebraic.
One cumbersome workaround for the user is to write a new UDF ALGFUNC_2 that is the same as ALGFUNC, except that ALGFUNC_2$Initial is the same as ALGFUNC$Intermed. This ALGFUNC_2 is then used in the last foreach.
Pig could automate this logic and use the combiner for the last foreach in the above translation.
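As an illustration (not part of the original comment), with such a hypothetical ALGFUNC_2, and reusing the placeholder names from the sketch above, the last foreach of the translation would become:

-- ALGFUNC_2$Initial is the same as ALGFUNC$Intermed, so the combiner can pre-aggregate the partials map-side
res = FOREACH gby GENERATE FLATTEN(group) as (c1, c2), FUNC(res_dist.c3), ALGFUNC_2(res_dist.partial);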


Dmitriy V. Ryaboy
added a comment - 10/Jun/11 20:50
yeah I was just using short-hand with the distinct thing, and assumed you would know what I meant
Is there a reason not to apply algebraic functions in an algebraic fashion when non-algebraic functions are also used in GENERATE? I think there was even a ticket to make this happen.
In practice I often manually apply this optimization by rewriting COUNT(distinct bar.foo), COUNT(bar), turning the second COUNT into a sum of counts – which is essentially doing the cumbersome workaround by hand. I wonder if there is a clean way to define / use these kinds of algebraic relationships.
Regarding two distincts – we can run the initial group-bys twice, and join?
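As an illustration (not from the original comment), that manual rewrite might look like the following for a hypothetical logs relation with page and user columns, computing impressions and distinct users per page:

-- group on (page, user) first; COUNT is algebraic, so the combiner does the heavy lifting here
by_page_user = GROUP logs BY (page, user) PARALLEL 100;
per_page_user = FOREACH by_page_user GENERATE group.page as page, COUNT(logs) as impressions, 1 as one;
-- second job: both aggregates are now sums over small records
by_page = GROUP per_page_user BY page;
result = FOREACH by_page GENERATE group as page,
    SUM(per_page_user.impressions) as impressions,  -- COUNT(bar) rewritten as a sum of per-(page, user) counts
    SUM(per_page_user.one) as distinct_users;       -- COUNT(distinct bar.foo) rewritten as a sum of 1s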

Alan Gates
added a comment - 10/Jun/11 22:15
Is there a reason not to apply algebraic functions in an algebraic fashion when non-algebraic functions are also used in GENERATE? I think there was even a ticket to make this happen.
When we tried this in the past the performance was very bad, because you end up running all the data through the combiner (which is costly due to the (de)serialization cycles) with no resulting reduction.

Thejas M Nair
added a comment - 10/Jun/11 22:31
yeah I was just using short-hand with the distinct thing, and assumed you would know what I meant
I didn't realize the mistake when I wrote the example. But the short-hand is more readable; I have created PIG-2117 to discuss supporting that syntax.
Regarding two distincts – we can run the initial group-bys twice, and join?
Yes, that will work.
If the UDF FUNC is algebraic and FUNC$Initial() returns something smaller than its argument (e.g. COUNT), a further optimization would be:
in = FOREACH in GENERATE *, ALGFUNC$Initial(c4) as init;
gby_dist = GROUP in BY (c1, c2, c3) PARALLEL 100;
res_dist = FOREACH gby_dist GENERATE
    group.c1 as c1, group.c2 as c2, FUNC$Initial(group.c3) as c3,
    ALGFUNC$Intermed(in.init) as intermed;
gby = GROUP res_dist BY (c1, c2) PARALLEL 100;
res = FOREACH gby GENERATE
    FLATTEN(group) as (c1, c2),
    FUNC_2(res_dist.c3),
    ALGFUNC_2(res_dist.intermed);
Here FUNC_2 is like the ALGFUNC_2 described earlier, with FUNC_2$Initial the same as FUNC$Intermed.

Dmitriy V. Ryaboy
added a comment - 10/Jun/11 22:39
This is a subject for a different ticket, but to address Alan's comment: have we considered in-memory combiners as in Lin & Schatz: http://portal.acm.org/citation.cfm?id=1830263 ?


Rohini Palaniswamy
added a comment - 13/Feb/15 00:39
I have wondered, when I have encountered cases like this, why we don't have a syntax that lets the user perform a distinct on selected columns of an alias, like below:
distinct_user_data = distinct user_data by (user, gender);
This would avoid the need to do a group by and distinct inside a nested foreach, and the distinct job would use its own combiner to reduce the number of records. The changes to the distinct implementation should not be that large either, as you would only have to additionally rearrange the key and values at the map and reduce ends.
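As an illustration (not part of the original comment), an approximation of the proposed statement with today's operators, assuming only the two named columns need to be kept, would be:

-- project the columns of interest, then distinct; the DISTINCT job already uses a combiner
projected = foreach user_data generate user, gender;
distinct_user_data = distinct projected parallel 100;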