Re: Hive Partitioning - maximum for cluster

I don't know of any hard limits. There are practical limitations, though: a table with 10k+ partitions will likely fail on operations that touch every partition, such as 'drop table'. That is generally the soft cap on partitions per table.

For the full cluster, the backend RDBMS hosting the metastore will dictate this somewhat. Again, there is no hard limit. I have seen clusters near 10 million partitions across all tables. Granted, HMS, HS2, and CatalogD were not stable at that partition count; a single large query (or a handful of them) doing full table scans would bring them down each time. Your HMS heap will also need to be large. Hive now has settings to prevent fetching all partitions at once and to limit the partition count per query.
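As a sketch of those guardrail settings (property availability varies by Hive version, and the limit values here are just illustrative):

```sql
-- Reject any query that would fetch more than 1000 partitions
-- from the metastore for a single table.
SET hive.metastore.limit.partition.request=1000;

-- In strict mode, queries against partitioned tables must include
-- a partition filter in the WHERE clause, preventing full scans.
SET hive.mapred.mode=strict;

-- Cap dynamic-partition inserts so a single job cannot create an
-- unbounded number of new partitions.
SET hive.exec.max.dynamic.partitions=1000;
SET hive.exec.max.dynamic.partitions.pernode=100;
```

These can also go in hive-site.xml if you want them enforced cluster-wide rather than per session.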

The Hive community is moving HMS to be backed by HBase to address the scalability of partitions, tables, and databases.