Thanks for the quick answer. It's good to know Thrift's limit on the amount of data it will accept / send.

I know the hard limit is 2 billion columns per row. My question is at what row size read/write performance and maintenance start to degrade. The blog I referenced says a row should stay under 10MB.

It would be better if Cassandra could transparently shard/split a wide row and distribute the pieces across many nodes to help with load balancing.
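
In the meantime, here is a rough sketch of the kind of manual sharding I have in mind (plain Python; the one-hour bucket width, the key format, and the sensor id are placeholders I made up, not anything Cassandra prescribes):

import time

BUCKET_SECONDS = 3600  # hypothetical bucket width: one row per sensor per hour

def row_key(sensor_id, ts):
    # Truncate the timestamp to the bucket boundary, so every sample from
    # the same hour lands in the same row, while each new hour starts a
    # new row (which can therefore land on a different node).
    bucket = int(ts) - (int(ts) % BUCKET_SECONDS)
    return "%s:%d" % (sensor_id, bucket)

print(row_key("sensor-42", time.time()))  # e.g. sensor-42:1334304000

The column name within each row would still be the full timestamp, so column slicing inside a bucket works exactly as in the blog post; the application just has to compute the set of bucket keys that covers a query's time range.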

Are there any other ways to model historical data (or time-series data) besides wide row column slicing in Cassandra?

> Hello experts,
>
> Based on this blog post on basic time-series data modeling with Cassandra:
> http://rubyscale.com/blog/2011/03/06/basic-time-series-with-cassandra/
>
> "This (wide row column slicing) works well enough for a while, bu=
t over time, this row will get very large. If you are storing sensor data t=
hat updates hundreds of times per second, that row will quickly become giga=
ntic and unusable. The answer to that is to shard the data up in some way&q=
uot;
>
> There is a limit on how big a row can get before update and query performance slow down, namely 10MB or less.
>
> Is this still true in the latest version of Cassandra? If not, in what release will Cassandra remove this limit?
>
> Manually sharding a wide row increases application complexity; it would be better if Cassandra could handle it transparently.
>
> Thanks,
> Charlie | DBA & Developer
>
>
> p.s. Quora link,
> http://www.quora.com/Cassandra-database/What-are-good-ways-to-design-data-model-in-Cassandra-for-historical-data
>