We have Hadoop jobs that read data from our Cassandra column families and write some data back to other column families. The input column families are pretty simple CQL3 tables without wide rows. In the Hadoop jobs we set up a corresponding WHERE clause in ConfigHelper.setInputWhereClauses(...), so we don't process the whole table at once. Nevertheless, sometimes the amount of data returned by the input query is big enough to cause TimedOutExceptions.

To mitigate this, I'd like to configure the Hadoop job in such a way that it sequentially fetches input rows in smaller portions.

I'm looking at the ConfigHelper.setRangeBatchSize() and CqlConfigHelper.setInputCQLPageRowSize() methods, but I'm a bit confused whether that's what I need and, if so, which one I should use for this purpose.
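For context, here is roughly how the job configuration in question might look. This is only a sketch: the keyspace, table, and WHERE clause are placeholders, and I'm assuming the method signatures from Cassandra's org.apache.cassandra.hadoop packages (setInputCQLPageRowSize taking a String, setRangeBatchSize taking an int).

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.cassandra.hadoop.ConfigHelper;
import org.apache.cassandra.hadoop.cql3.CqlConfigHelper;

public class JobSetupSketch {
    public static void configure(Configuration conf) {
        // Placeholder keyspace and column family names
        ConfigHelper.setInputColumnFamily(conf, "my_keyspace", "my_table");

        // Restrict the input with a WHERE clause so the job does not
        // process the whole table at once (placeholder predicate)
        CqlConfigHelper.setInputWhereClauses(conf, "event_date = '2014-01-01'");

        // Candidate knob 1: how many CQL rows each underlying
        // query page fetches at a time
        CqlConfigHelper.setInputCQLPageRowSize(conf, "500");

        // Candidate knob 2: the batch size used when iterating
        // over token ranges
        ConfigHelper.setRangeBatchSize(conf, 500);
    }
}
```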