I haven’t been able to assign values when I have more than one attribute. Can anybody help me?
What is the syntax to put in place of the ??? in the second code below?
Something like this:
geometry3d[0][0][0][0][0] = (1,2,3,4,5,6,7);
where velocity_x=1, velocity_y=2, …

So if you think that’s a complete PITA, you’re not alone. I suspect what you really want to do is to generate data over the entire array space, i.e. for each simulation, for each time step, for each X, Y and Z, generate a value. So, here are some thoughts.

First, don’t use chunk_length = 1. The whole point of our little Array DBMS is to permit folk to cluster data by region. What you want is, for every chunk, to have a sizeable portion of the data in it. We usually shoot for chunk sizes of between 2 and 16 meg (YMMV). What you have here, with every chunk_length = 1, is a chunk of exactly 1 element - say, 8 bytes. Here’s what I suggest. You probably have a min/max range for each of the X, Y, Z and T dimensions, plus some number of simulations, and (say) 8 bytes per attribute. At 8 bytes per element, a 2 to 16 meg chunk holds roughly 250,000 to 2,000,000 elements, so aim for about 1,000,000 elements per chunk. That is, try to find values for X.chunk_length, Y.chunk_length, Z.chunk_length and T.chunk_length which, when multiplied together, get you to about 1,000,000.

My second intuition is that you don’t necessarily want more than one T per chunk. I might be wrong about this; you might be computing windows of some length through T to “smooth” your < X, Y, Z > data over time. It depends on your workload. But if what you’re doing is to compare T=t with T=t+1 (say), or you’re slicing 3D boxes out for Simulation=S and T=t, then I would set T.chunk_length to 1, and set X.chunk_length, Y.chunk_length and Z.chunk_length to 100 each. That will give you 8M “chunks”, each of which contains a 100x100x100 “box” in the X, Y, Z space, for a fixed S and T. Note that I am assuming here that your 5D space is dense. If it’s not, you’ll need to adjust your chunk lengths accordingly. Chunk lengths are specified in the logical space.
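As a sketch of what that schema might look like (the array name, attribute names, dimension ranges and types here are my assumptions, not taken from the original post):

```
-- Hypothetical schema: 10 simulations, 100 time steps, a 100x100x100 space.
-- T.chunk_length = 1 keeps each chunk to a single time step, while the
-- X, Y and Z chunk_lengths of 100 give 100x100x100 = 1,000,000 cells per chunk.
CREATE ARRAY geometry3d
<velocity_x:double, velocity_y:double, velocity_z:double>
[S=0:9,1,0, T=0:99,1,0, X=0:99,100,0, Y=0:99,100,0, Z=0:99,100,0];
```

At 8 bytes per value, that’s about 8 meg of data per attribute per chunk, which lands in the 2 to 16 meg range suggested above.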

If you’re trying to simulate this data, I wouldn’t use a bunch of build() or build_sparse() ops; the join()s get ugly. I would take one of the following two options.

3.1 Use cross_join(). The idea is that you can build a vector of the length you want along each dimension, and then use cross_join() to build a large, nD array. Something like this:
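Here is a sketch of the shape such a query might take; the attribute names, dimension ranges, and the use of random() are my assumptions:

```
-- Build one 1D vector per dimension, then cross_join them pairwise
-- into a dense 5D array. Each cell of the result carries five
-- attributes, one inherited from each input vector.
store(
  cross_join(
    cross_join(
      cross_join(
        cross_join(
          build(<a:double>[S=0:9,10,0],   random() % 100),
          build(<b:double>[T=0:99,100,0], random() % 100)),
        build(<c:double>[X=0:99,100,0], random() % 100)),
      build(<d:double>[Y=0:99,100,0], random() % 100)),
    build(<e:double>[Z=0:99,100,0], random() % 100)),
  crossed);
```

Because every cell along a given X (say) shares the same per-axis value, this is exactly what produces the stripes and planes described next.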

Now, there are a lot of problems with this from a data generation point of view. You won’t end up with something that looks even semi-random. You’ll end up with lots of stripes and planes through the data. So - there’s another approach.

3.2 Generate the data in 1D, then use redimension_store() to convert it to the target shape. The following query generates exactly 10,000 points in the 5D space of 100x100x100x100x10. You can fiddle with the types in the query to get the ranges and types you want.
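A sketch of such a query, with my own guesses for the array names, casts, and the value expression:

```
-- Target: the 5D array we actually want to populate.
CREATE ARRAY target
<val:double>
[X=0:99,100,0, Y=0:99,100,0, Z=0:99,100,0, T=0:99,1,0, S=0:9,1,0];

-- Build 10,000 cells along a 1D dimension, attach random target
-- coordinates and a value to each, then redimension into 5D.
redimension_store(
  apply(
    build(<dummy:int64>[i=0:9999,10000,0], i),
    X, int64(random()) % 100,
    Y, int64(random()) % 100,
    Z, int64(random()) % 100,
    T, int64(random()) % 100,
    S, int64(random()) % 10,
    val, double(random() % 1000) / 100.0),
  target);
```

One caveat: if two of the 10,000 source cells land on the same (X, Y, Z, T, S) coordinate they collide, so the populated count can come in slightly under 10,000.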

Note that when you run the query above, you’re going to create a very, very sparse array: 100x100x100x100x10 = 1,000,000,000 cells, of which you’re only going to populate about 10,000. You’re asking SciDB to store more information about what isn’t there (all of the “empty” space needs to be logged) than about what actually is there. To overcome this, consider adjusting the chunk_length values for your dimensions.
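For example, with a density of roughly 10^-5, one option (again, a sketch with assumed names and ranges) is to make the chunks span much more of the logical space, so that each chunk holds a meaningful number of populated cells:

```
-- With ~10,000 points in 10^9 logical cells, small chunks hold almost
-- nothing. Here a single chunk spans the entire logical space, so it
-- holds all ~10,000 populated cells.
CREATE ARRAY target_sparse
<val:double>
[X=0:99,100,0, Y=0:99,100,0, Z=0:99,100,0, T=0:99,100,0, S=0:9,10,0];
```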