I need to process large amounts of tabular data of mixed type - strings and doubles. A standard problem, I would think. What is the best data structure in Matlab for working with this?

A cell array is definitely not the answer: it is extremely memory-inefficient (tests shown below). Dataset (from the Statistics Toolbox) is horribly time- and space-inefficient. That leaves me with a struct array or a struct of arrays. I tested all four options for both time and memory (below), and the struct of arrays seems to be the best option for everything I tested.

I am relatively new to Matlab and this is a bit disappointing, frankly. Anyway, I am looking for advice on whether I am missing something, and on whether my tests are accurate and reasonable. Are there other considerations besides access speed, conversion, and memory usage that are likely to come up as I write more code using these structures?
(FYI, I am using R2010b.)

**** Test #1: Access speed
Accessing a data item.

cellarray:        0.002s
dataset:         36.665s %<<< This is horrible
structarray:      0.001s
struct of arrays: 0.000s
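(For reference, a minimal sketch of the kind of access test behind these numbers; the structure, field names, and sizes here are illustrative, not the exact test code:)

```matlab
% Sketch of an access-speed test on a struct of arrays.
n = 100000;
soa.name  = cellstr(num2str((1:n)'));   % string column (cell array)
soa.price = rand(n, 1);                 % numeric column

tic
for k = 1:1000
    x = soa.price(randi(n));            % access one data item
end
toc
```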

I would say that a mixed set of type-specific data structures is almost always better than a single flexible data structure holding mixed data. What you want to guarantee is contiguity, whenever that is possible and meaningful. If you want multiple data structures to appear under the same base name, or to be able to pass a whole set of mixed structures to functions under a single variable name, use a struct.

To illustrate: if you want to store asset names, codes, and prices, your best option is probably to build the following kind of struct and access its fields using logical indexing where possible:
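Something along these lines (the field names and values are just an illustration):

```matlab
% Struct of arrays: one homogeneous column vector per field.
% Wrapping the cell array in an extra {} keeps the struct scalar.
assets = struct( ...
    'name',  {{'AAPL'; 'GOOG'; 'MSFT'}}, ...  % string column (cell array)
    'code',  [101; 102; 103], ...             % numeric column
    'price', [150.25; 2725.60; 299.10]);      % numeric column

% Logical indexing on one field selects matching rows in the others:
cheap      = assets.price < 500;
cheapNames = assets.name(cheap);
```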

Thanks - makes sense. I will try this "struct of arrays" approach ("mixed set of specific data structures").

What also matters is readability and the general ease of working with the data. I can test speed and memory easily, but I don't have enough Matlab experience to judge these other factors.

For example, one issue with a struct of arrays (or cell of arrays) is row indexing. With my struct array it is easy: sc(1:10,:) picks out the first 10 rows and returns a struct array of 10 rows. However, to get the same from a struct of arrays, I need:

[num2cell(sac.Var1(1:10)), sac.Var2(1:10), ...]
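One way to make this less painful is a small helper (a sketch; the name `selectRows` and the assumption that every field is a column vector with the same number of rows are mine):

```matlab
function sub = selectRows(s, idx)
%SELECTROWS Return a struct of arrays restricted to the rows in idx.
%   Assumes every field of s is a column vector (numeric or cell)
%   with the same number of rows.
sub    = s;
fields = fieldnames(s);
for k = 1:numel(fields)
    sub.(fields{k}) = s.(fields{k})(idx, :);
end
end
```

Then `first10 = selectRows(sac, 1:10)` returns a struct of arrays of 10 rows, analogous to `sc(1:10,:)` on a struct array.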

Any thoughts/comments, especially from experience of others who have had to deal with similar issues in a largish Matlab program?

What about sorting a struct of arrays or a cell of arrays?
The best I can think of is: use sort on the column(s) you want to sort by to get an index, then reorder all columns using that index (along the lines of the link above). Painful, no?
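In code, that sort-then-reorder idea looks something like this (a sketch; the field name `price` is illustrative):

```matlab
% Sort a struct of arrays by one numeric column, then apply the
% resulting permutation to every field.
[~, order] = sort(sac.price);           % index that sorts the key column
fields = fieldnames(sac);
for k = 1:numel(fields)
    sac.(fields{k}) = sac.(fields{k})(order, :);
end
```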

I'm a bit surprised to discover that sortrows works on cells. Nothing in the documentation about it.

Anyway, the fact that it requires multiple lines of code wouldn't make it "painful" in my book. You can always encapsulate the multiple lines in your own m-file and reuse that.

The issue you point out only arises when the columns being sorted are of mixed type. My approach might be to convert all the numeric data to strings, concatenate the columns being sorted into one big string matrix, and then run sortrows on that.
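A sketch of that idea (the variable names are illustrative; note that fixed-width, zero-padded formatting is what makes the lexicographic sort agree with numeric order, and it only works for non-negative numbers):

```matlab
% Sort records by a string column, then by a numeric column, using one
% char-matrix key and sortrows.
names  = {'beta'; 'alpha'; 'alpha'};
prices = [2.5; 10.0; 3.0];

% char() pads names to equal width; %012.4f zero-pads the numbers so
% that string order matches numeric order (non-negative values only).
key = [char(names), num2str(prices, '%012.4f')];

[~, order] = sortrows(key);
names  = names(order);
prices = prices(order);
```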

As best I can tell, you haven't tested a "cell of arrays", i.e., instead of a 100000x10 cell array, a 1x10 cell array where each c{i} contains one column of data as an array. It should behave much like the "struct of arrays", but with easier indexing.
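For example (column contents are illustrative):

```matlab
% "Cell of arrays": one cell per column, each holding a homogeneous array.
c    = cell(1, 3);
c{1} = {'AAPL'; 'GOOG'; 'MSFT'};   % string column as a cell array
c{2} = [101; 102; 103];            % numeric column
c{3} = [150.25; 2725.60; 299.10];  % numeric column

% Row selection: apply the same row index to every column.
idx      = 1:2;
firstTwo = cellfun(@(col) col(idx, :), c, 'UniformOutput', false);
```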

Beyond that, nothing in your tests is very unexpected. You have a large amount of data and have to be careful not to scatter it discontiguously in memory. Successive cell/struct elements cannot be held contiguously in memory, because they hold non-homogeneous data types. Numeric and string arrays are contiguous, however, so by grouping things into large numeric/string sub-arrays where possible, you maximize data contiguity, which leads to efficiencies both in access speed and memory usage.

As for "dataset", I cannot comment, since I don't have the Stats Toolbox. However, a mixed data table with 100000 rows is uncommonly large in my experience. I don't think you would ever see it in an Excel spreadsheet, for example. If dataset was meant to be "Excel-like", I can imagine 100000 rows being outside the usage the designers anticipated.

Thanks for your response.
Struct of arrays vs. cell of arrays: the difference to me appears to be one of named indexing (struct) vs. numbered indexing (cell). Is there anything else that would make one approach preferable to the other?

As Sean says, nothing of great consequence. There are small differences in storage since structs need to allocate memory for field names. Maybe small differences in indexing speed, too, to convert string indices to numeric ones.

Thought I would post some of my thoughts after looking at this problem a bit more.

I don't see an efficient built-in data structure in Matlab for managing heterogeneous tabular data. The best I can do is a struct of vectors, where each vector is either a numeric array or a cell array depending on the data type being stored. It is then up to me to keep the vectors equal in length and to write accessor functions that efficiently get/set arbitrary "sub-matrices" of the heterogeneous data matrix.
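Such accessors might look like this (a sketch; the function names and the invariant that all fields have equal length are mine):

```matlab
function vals = getRows(tbl, rows, cols)
%GETROWS Return a 1xN cell, one entry per requested column name in cols
%   (a cell array of field names), restricted to the given rows.
vals = cellfun(@(f) tbl.(f)(rows, :), cols, 'UniformOutput', false);
end

function tbl = setRows(tbl, rows, cols, vals)
%SETROWS Overwrite the given rows of the given columns; vals{k}
%   holds the new values for the column named cols{k}.
for k = 1:numel(cols)
    tbl.(cols{k})(rows, :) = vals{k};
end
end
```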

As noted above - dataset is too inefficient both in space and time. Cellarray is too inefficient in space.
---
Taking this route, I run into other issues to do with copy-on-write semantics - but that's for another thread.

It would not be too difficult to create your own class for this, if you can precisely define what you need, especially if you know OOP but have never used it in MATLAB. Let us know if you are interested; this would be a good case/pretext/application for making the step towards OOP. It would not be more efficient than managing numeric arrays and cell arrays directly, since you would essentially be building a wrapper around these structures with proper methods for managing size and indexing, but it would make the whole thing clean and consistent.
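A skeleton of such a wrapper class could start like this (everything here, including the class name `MixedTable`, is illustrative, not a finished implementation):

```matlab
classdef MixedTable
    % Thin wrapper around a struct of column vectors (numeric or cell),
    % enforcing the equal-length invariant and centralizing row indexing.
    properties (Access = private)
        cols   % struct of column vectors
    end
    methods
        function obj = MixedTable(cols)
            % Validate that all columns have the same number of rows.
            f = fieldnames(cols);
            n = cellfun(@(name) size(cols.(name), 1), f);
            assert(all(n == n(1)), 'All columns must have equal length');
            obj.cols = cols;
        end
        function sub = rows(obj, idx)
            % Return a new MixedTable restricted to the rows in idx.
            f = fieldnames(obj.cols);
            c = obj.cols;
            for k = 1:numel(f)
                c.(f{k}) = c.(f{k})(idx, :);
            end
            sub = MixedTable(c);
        end
    end
end
```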