Working with v.large Datasets

Does anyone know if future compilations of the core source will include the <gcAllowVeryLargeObjects> element in the app.config, given that the core framework is now on .NET 4/4.5? It would certainly help a great deal; failing that, is there any chance of getting some further documentation on asynchronous <object>List calls?
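For reference, the switch in question is a runtime element in the application's config file. It requires .NET 4.5 and a 64-bit process, and it lifts the 2 GB per-object cap (the maximum element count of any single array dimension is still limited). A minimal app.config would look like:

```xml
<configuration>
  <runtime>
    <!-- Requires .NET 4.5+ and a 64-bit process. Lifts the 2 GB
         per-object limit, though the maximum number of elements
         in a single array dimension is still capped. -->
    <gcAllowVeryLargeObjects enabled="true" />
  </runtime>
</configuration>
```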

Re: Working with v.large Datasets

Just curious about this. Does that limit apply to the 64-bit CLR? I know that 32-bit Windows basically doesn't allow an application more than 2 GB of address space, but that limitation isn't there in 64-bit Windows. One would think the .NET Framework would follow the same logic (but it's two different teams at Microsoft, so it's anyone's guess).

Re: Working with v.large Datasets

Only you would have problems because you couldn't manipulate a dataset larger than 2 GIGABYTES in a plugin you wrote for ACT!. You've managed to brighten my day twice today already. The other time was the post about the improved performance you were seeing with ACT! v16. :-)

Re: Working with v.large Datasets

Haha, OK, I'm busted for being lazy and not fragmenting/chaining the data. Still, if the SDK is structured around returning non-generic collections even though we're in a .NET 4 environment, and we can't make asynchronous calls, it would make sense to allow big data.

I don't see any harm in allowing this, but I agree with the other posters - a 2 GB array - jeez, you are crunching some data.

I think this only applies if you are running a plugin - if you reference an instance of the framework (or connect to the data source directly) from your stand-alone code, then you can cast to a BigArray-style type (I assume your arrays are numeric), or use other methods to avoid the 2 GB limitation.

Any chance we could get an idea how you are using a single, gigantic array?
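If the plugin route is the blocker, segmented storage is one way to do the fragmenting/chaining mentioned above. This is an illustrative sketch only (ChunkedList is not an ACT! SDK or BCL type): it keeps each underlying array well under the 2 GB object limit while presenting a single indexable collection.

```csharp
using System;
using System.Collections.Generic;

// Sketch: avoid the 2 GB single-object limit by chaining fixed-size
// segments instead of allocating one gigantic array.
public class ChunkedList<T>
{
    private const int SegmentSize = 1 << 20; // ~1M elements per segment
    private readonly List<T[]> _segments = new List<T[]>();
    private int _count;

    public int Count { get { return _count; } }

    public void Add(T item)
    {
        int seg = _count / SegmentSize;
        if (seg == _segments.Count)
            _segments.Add(new T[SegmentSize]); // grow by one segment at a time
        _segments[seg][_count % SegmentSize] = item;
        _count++;
    }

    public T this[int index]
    {
        get { return _segments[index / SegmentSize][index % SegmentSize]; }
    }
}
```

Each segment is a modest, independently collected array, so the GC never sees a single multi-gigabyte object.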

Re: Working with v.large Datasets

It's actually for an internal utility for migrating data from one DB to another. The Histories and Activities data sets can get very large very quickly when you are dealing with 30+ very active users in an organisation, and one of the key requirements is keeping up to 5-7 years' worth of institutional memory.

I grant it's quite eye-watering and not all that frequent, but when necessary it can be a real pain, e.g. upgrading to a newer version of Act! from a much older one. The DB design is completely revamped, and redundant fields are deprecated from the new schema. This requires a new DB to be created with the new schema structure and design, and the old data to be migrated en masse. It is purely the secondary entities holding massive amounts of data that cause traditional third-party tools to fall over, and so require dedicated internal tools for this kind of task. Ideally this would be a job for something like the bcp utility in SQL Server, but since we don't have enough documentation on the schema for me to feel comfortable and confident using full CRUD operations at the data layer, I am restricted to using the Framework.
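A paged migration loop sidesteps the giant-collection problem entirely: rather than materialising all Histories or Activities at once, pull and write one page at a time. In this sketch, fetchPage and writeBatch are stand-ins for whatever the real source and target calls are (they are not actual ACT! SDK APIs):

```csharp
using System;
using System.Collections.Generic;

public static class Migration
{
    // Pulls rows from the source in fixed-size pages and writes each
    // page to the target before fetching the next. No single in-memory
    // collection ever approaches the 2 GB object limit.
    public static int MigrateInPages<T>(
        Func<int, int, IList<T>> fetchPage, // (offset, pageSize) -> rows
        Action<IList<T>> writeBatch,        // persist one page to the target
        int pageSize = 10000)
    {
        int offset = 0;
        while (true)
        {
            IList<T> page = fetchPage(offset, pageSize);
            if (page.Count == 0) break;       // no more rows
            writeBatch(page);
            offset += page.Count;
            if (page.Count < pageSize) break; // last, partial page
        }
        return offset;                        // total rows migrated
    }
}
```

The trade-off is more round trips against the source, but memory use stays flat no matter how many years of History records are involved.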

I must admit I am surprised that others have not run into these kinds of issues in the past. I really must be doing something wrong, or just repeatedly have abnormally large, data-hungry clients?? *worry*worry*fret*fret*

Re: Working with v.large Datasets

Edit: I had mentioned breaking the data up, but then I re-read some of the posts and saw that Vivek knew he was being lazy in that regard. So I've removed my comment.

On a side note, yes, I think you have very data-hungry clients. I'd think companies of that size, with 30+ active users, would have moved on to one of the "larger" products (Microsoft CRM comes to mind, though I haven't really researched CRM products).