How can we improve Azure Storage?

Ability to delete from Table Storage by Partition

There is no good way to delete multiple entities in Table Storage. All you can do is delete one at a time. For logging tables this can become VERY expensive. It would be great if we had the ability to use partitions as a way to delete logical groups of data in Table Storage in a single transaction.

This would allow a partitioning scheme that groups data into units which can easily be deleted. For example, logging data in WADLogsTable, or rolling tables of data captured on a given partition, could be archived and cleaned up easily.
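Until partition-level delete exists, one common workaround is to make the deletion unit a whole table rather than a partition, e.g. one table per day, so cleanup becomes a single delete-table call instead of millions of per-entity deletes. A minimal sketch of that naming scheme (the `WADLogs` prefix and helper names here are illustrative, not an official API):

```python
from datetime import date, timedelta

def rolling_table_name(prefix: str, day: date) -> str:
    # Table names must be alphanumeric, so encode the date as yyyyMMdd.
    return f"{prefix}{day:%Y%m%d}"

def tables_to_drop(prefix: str, today: date, retain_days: int, lookback: int = 30):
    # Tables older than the retention window. Dropping a whole table removes
    # all of its entities in one operation, unlike per-entity deletes.
    cutoff = today - timedelta(days=retain_days)
    return [rolling_table_name(prefix, cutoff - timedelta(days=i))
            for i in range(1, lookback + 1)]
```

With a 7-day retention you would then call your SDK's delete-table operation for each returned name; the exact call varies by SDK.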

Can someone advise when, or if, Azure Data Lake Analytics will support ADLS Gen2? Currently it only supports Gen1, but we need ADLA functionality with Gen2. In fact, many current customers on the Gen1 platform won't migrate to Gen2 because ADLA support is missing. Moving processes over to Databricks, HDInsight, etc. may not provide the same functionality, or may even be overkill.

It's been well over a year since the response on this; has there been any progress in that time? The ability to delete by partition key would be awesome, especially for customers like me who can't restructure our data so that each partition becomes its own table. Deleting millions of rows with entity group transactions (EGTs), after querying them all out first, isn't great!
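For reference, the "query, then delete in EGT batches" workaround described above looks roughly like this. Only the chunking logic is concrete here; `submit_batch` is a hypothetical stand-in for whatever batch-execute call your SDK exposes:

```python
def egt_delete_partition(row_keys, submit_batch, partition_key, batch_size=100):
    """Delete every entity in one partition, at most 100 at a time.

    An entity group transaction (EGT) is limited to 100 operations and a
    single partition, so the row keys (obtained from a prior query) must
    be chunked before submission. `submit_batch` stands in for the SDK's
    batch-execute call and is assumed, not a real API.
    """
    batches = 0
    for start in range(0, len(row_keys), batch_size):
        chunk = row_keys[start:start + batch_size]
        submit_batch(partition_key, [("delete", rk) for rk in chunk])
        batches += 1
    return batches
```

Note this still costs one round trip per 100 entities, plus the initial query, which is exactly the expense the feature request is about.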

You can perform a batch delete, but it is restricted to 100 entities in a single partition. I also find it frustrating that if I convert a table entity to a model for my client and back to a table entity in my data layer, TableBatchOperation Delete never finds my entity. Instead I must query for the entity and then batch-delete the retrieved copy. What is going on internally in TableEntity that prevents the entity => model => entity round trip? All Delete should care about is the PartitionKey and RowKey, but that appears not to be the case.
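One likely cause, based on how the older .NET Table SDK behaves: a delete sends an `If-Match` header built from the entity's ETag, and an entity freshly reconstructed from a model has no ETag, so the request is rejected rather than matched on keys alone. Setting the ETag to the wildcard `"*"` requests an unconditional delete, which is why the queried copy (whose ETag is populated) works. A minimal sketch of the idea; the dict shape is illustrative, not a real SDK type:

```python
def to_delete_op(partition_key, row_key, etag=None):
    # A delete identifies the entity by PartitionKey + RowKey, but the
    # service also evaluates If-Match. "*" means "match any version",
    # i.e. an unconditional delete -- what you want when the entity was
    # rebuilt from a model and its original ETag was lost.
    return {
        "PartitionKey": partition_key,
        "RowKey": row_key,
        "If-Match": etag if etag is not None else "*",
    }
```

In the .NET SDK the equivalent fix is to set `entity.ETag = "*"` on the reconstructed entity before adding the Delete to the batch.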

Having to retrieve entities first, or to already know each entity's partition and row keys, and then issue a separate delete request for every entity is not practical (and is expensive) for large historical tables, which are one of the primary use cases for Table Storage!