It is an interesting question, but I am not sure I understand the part about saving disk space. How would that work? The buffer pool size is equal to or less than the tablespace size, so there is no size reduction (well, some if you use deep compression, but that can reduce performance).

If you want a lot of data available at a high rate, my advice is to use a buffer pool with a large block size; this keeps contiguous blocks of disk laid out contiguously in memory as well, reducing the "jumps". But this only works if the tables in the related tablespaces fit in that space. If not, you will waste memory.
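As a sketch of what that looks like in DB2 (the pool name and page counts here are illustrative, not from the original post), a block-based buffer pool reserves part of its pages for sequential block I/O:

```sql
-- Hypothetical sizes: a 32K-page buffer pool with an area
-- reserved for block-based (sequential) I/O.
CREATE BUFFERPOOL bp32k_block
  SIZE 100000            -- total pages; must be large enough to hold the tables you want cached
  PAGESIZE 32K
  NUMBLOCKPAGES 60000    -- pages set aside for block-based I/O
  BLOCKSIZE 32;          -- pages per block; usually matched to the tablespace EXTENTSIZE
```

Matching BLOCKSIZE to the tablespace extent size lets prefetched extents land in contiguous memory blocks instead of being scattered across the pool.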

It also helps to tune the tablespace: set the prefetch size and extent size according to the RAID layout, and, if you can, adjust the overhead (seek time) and transfer rate settings to match the storage.
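A minimal sketch of that tuning, assuming a 4-disk RAID stripe (the tablespace name and all numbers are illustrative):

```sql
-- Hypothetical DDL: extent/prefetch sized for a 4-disk RAID stripe,
-- with device characteristics so the optimizer costs I/O correctly.
CREATE TABLESPACE ts_ro
  PAGESIZE 32K
  MANAGED BY AUTOMATIC STORAGE
  EXTENTSIZE 32          -- pages per extent (often one RAID stripe unit)
  PREFETCHSIZE 128       -- EXTENTSIZE * number of disks in the stripe
  OVERHEAD 7.5           -- average seek + latency, in milliseconds
  TRANSFERRATE 0.06;     -- milliseconds to read one page
```

With PREFETCHSIZE a multiple of EXTENTSIZE times the number of disks, one prefetch request can drive all spindles in parallel.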

Finally, you could use a set of SSDs to provide higher speed.

If the information is read-only, run the queries with UR (uncommitted read), which will reduce locking.
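In DB2 this is just an isolation clause on the statement (table and column names below are hypothetical):

```sql
-- UR (uncommitted read) avoids taking row locks,
-- which is safe when the data is read-only anyway.
SELECT customer_id, total
FROM sales_summary
WHERE region = 'EMEA'
WITH UR;
```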

1. We have a 100 GB DB where we want to put read-only versions into the cloud. Since we prefer to have all data in main memory anyway, and we have a redundant copy "at home", we don't want the data in the cloud stored on hard drives, but in main memory only.

So if the DB supported in-memory-only tables, we could load this data into the tables remotely on startup.

Create a bufferpool as large as your table and dedicate it to the tablespace the table resides in (assuming you have only one table in that tablespace). I've done this with dimension tables in our model; the tables stay in memory and achieve hit rates of up to 8 million rows per second.
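A sketch of that setup (all names and sizes are hypothetical; size the pool from the table's page count in SYSCAT.TABLES):

```sql
-- Pin a small dimension table in its own buffer pool.
-- The pool must hold at least as many pages as the table occupies
-- (here ~50,000 4K pages, i.e. about 200 MB).
CREATE BUFFERPOOL bp_dim SIZE 50000 PAGESIZE 4K;

CREATE TABLESPACE ts_dim
  PAGESIZE 4K
  MANAGED BY AUTOMATIC STORAGE
  BUFFERPOOL bp_dim;

-- Place only the dimension table here so nothing else evicts its pages.
CREATE TABLE dim_product (
  product_id INT NOT NULL PRIMARY KEY,
  name       VARCHAR(100)
) IN ts_dim;
```

Because the table is the tablespace's only occupant and fits entirely in the pool, its pages are never displaced once warmed.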

Andres,
Compression doesn't necessarily mean performance degradation; in most cases it's a performance gain. Think about it: what is the bottleneck in DB systems? It's I/O. If compression doubles the number of rows you can fetch in one I/O operation, you have roughly doubled your read throughput, at the cost of some CPU to decompress.
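To make the trade-off concrete, row compression in DB2 can be enabled per table; the table name below is hypothetical:

```sql
-- Enable row compression so more rows fit per page; each prefetch
-- I/O then brings in roughly twice the rows (actual ratio varies).
ALTER TABLE sales_summary COMPRESS YES;
REORG TABLE sales_summary;         -- rewrite existing rows in compressed format
RUNSTATS ON TABLE sales_summary;   -- refresh statistics for the optimizer
```

The CPU spent decompressing is usually cheaper than the disk reads it saves, which is why I/O-bound workloads tend to get faster, not slower.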