Cubes, partitions and performance

Sorry about reposting this here as a new thread, but maybe the post was so deep in the thread that nobody paid attention to it.

Here it goes again.

I'm trying to speed up cube processing time by means of partitions.
Let's suppose I have defined the partitions in such a way that only the most recent records get processed.
So I process just the partition with the last added records.
The problem is with the virtual cubes that rely on the cube I just processed.

How can I avoid processing the whole virtual cube? There are no partitions in a virtual cube.....

All the time I gain by processing just the partition with new data is lost when I'm forced to do a full process (or refresh data) on the virtual cube.

What are the criteria for partitioning, since it can bring great performance advantages in Analysis Services and make BI data easier to manage?

Cube processing is included in the tasks performed by Master Update, but cube processing is not the bottleneck: cube processing performs at more than twice the speed of Master Update. Thus, the RDBMS operations performed by Master Update deserve the most careful characterization.

One of the TechNet documents says:
Another approach might be to add a clustered index on the time key field of the fact table in the Staging database, which should enable Master Update to select the rows that belong in each partition table in the Subject Matter database more quickly. A clustered index slows Master Import somewhat, but its impact can be minimized if you presort the data.
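The suggested index can be sketched in T-SQL. The fact table and time key column names below are hypothetical, purely for illustration:

```sql
-- Assumed names: dbo.FactSales and TimeKey are placeholders
-- for your actual fact table and time key field.
CREATE CLUSTERED INDEX IX_FactSales_TimeKey
    ON dbo.FactSales (TimeKey);
-- Presorting the incoming data by TimeKey before Master Import runs
-- reduces the cost of maintaining this clustered index during inserts,
-- while partition selection by time range in Master Update gets faster.
```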

I must not have read your question in the last thread. Regarding virtual cube processing, I don't think it is necessary, since virtual cubes contain only the structure of the source cubes and not the actual data; think of a virtual cube as a View in SQL Server. Only process a virtual cube the first time you create it and when you change the schema of the source cubes, e.g. calculated members, dimensions, measures, etc.
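The View analogy can be made concrete with a T-SQL sketch (table, view, and column names here are made up for illustration):

```sql
-- A view stores only its definition, not the rows it returns.
CREATE VIEW dbo.vSalesByStore AS
SELECT StoreId, SUM(Amount) AS TotalAmount
FROM dbo.SalesFact
GROUP BY StoreId;
-- When rows in dbo.SalesFact change, queries against dbo.vSalesByStore
-- see the new data immediately, with no "reprocessing" of the view --
-- the same way a virtual cube reflects its reprocessed source cubes.
```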

Why don't you test this out yourself, for your own future reference? On a non-production cube, for instance the FoodMart cube, create a virtual cube and then process it; notice that it doesn't ask for a storage mode, nor do any dimensions get processed.

"Because virtual cubes store only their definitions and not the data of their component cubes, they require virtually no physical storage space."

Then modify the data in the source cube and reprocess it; notice how the data changes in the source cube are reflected in the virtual cube.

BTW, I did say that the virtual cube requires processing if the structure of the source cube changes.

Ok.

I've tested it, and you're right.

There's no need to process the virtual cube.
It seems it is like a view after all.

There's only one thing I don't quite understand.

Why is it that when you process a cube (a cube, not a partition) in Analysis Manager (no matter whether you do a refresh data or a full process), the program automatically processes the virtual cubes that rely on the one just processed?

This was one of the things that confused me.
Is it some kind of error in Analysis Manager? Why does it automatically reprocess the virtual cube?

Thanks a lot.
Thanks, buddy.
We'll have a taco and tequila any time you like ;-)