I’m a geospatial researcher working with satellite imagery (2-D or 3-D arrays). I wrote a script that loads a multidimensional array into SciDB, based on the work in this GitHub repo: https://github.com/albhasan/gdal2scidb

My main modification was using NumPy to convert the array into raw bytes, which lets it load into SciDB much faster than a CSV. My question: my destination array is two-dimensional with a chunk size of 1,000. Will I improve performance if I read and load the data according to chunk boundaries? Right now I might load an array with arbitrary dimensions, say 500 x 14,300. Would taking targeted reads of 1,000 x 10,000 that conform to the chunk size speed up the load? (Sketches of both the binary conversion and the chunk-aligned reads I have in mind are below.)
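For context, here is a minimal sketch of the NumPy-to-bytes conversion I mean. File names are illustrative, and it assumes a single int16 band; the idea is just that `tobytes()` dumps the cell values in row-major order so SciDB can read them back with a matching binary format string instead of parsing CSV text:

```python
import numpy as np
from osgeo import gdal

# Illustrative paths/names only
src = gdal.Open("scene.tif")
band = src.GetRasterBand(1)

# Read the whole band into a NumPy array (int16 assumed here)
arr = band.ReadAsArray().astype(np.int16)

# Dump the raw cell values; row-major order matches a flat
# row/col traversal on the SciDB side
with open("scene.sdbbin", "wb") as f:
    f.write(arr.tobytes())
```

The resulting file can then be loaded with SciDB's binary load (something like `load(dest, '/path/scene.sdbbin', -2, '(int16)')`, with the format string matching the dtype) and redimensioned into the 2-D target, which is roughly what gdal2scidb does with its CSV path.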
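And here is a sketch of the chunk-aligned reading I'm asking about, using GDAL's windowed `ReadAsArray(xoff, yoff, xsize, ysize)`. The `CHUNK` constant and file names are assumptions; each read covers a whole number of destination chunks (except at the right/bottom edges):

```python
import numpy as np
from osgeo import gdal

CHUNK = 1000  # destination array's chunk length per dimension (assumed)

src = gdal.Open("scene.tif")
band = src.GetRasterBand(1)
xsize, ysize = src.RasterXSize, src.RasterYSize

for y0 in range(0, ysize, CHUNK):
    for x0 in range(0, xsize, CHUNK):
        win_x = min(CHUNK, xsize - x0)
        win_y = min(CHUNK, ysize - y0)
        # Windowed read aligned to chunk boundaries
        block = band.ReadAsArray(x0, y0, win_x, win_y).astype(np.int16)
        # ... write block.tobytes() to the load file/pipe here ...
```

My intuition is that aligned reads would let each load touch fewer chunks per write, but I'd like confirmation on whether that actually matters for load performance.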