I'm a registered user of the SolFS tools.
We are experiencing low performance on read operations on a SolFS archive. Your web site says that the evaluation version deliberately decreases the performance of the tool.

I have tried to:
1) Change the registration key used.
2) Not register the provided key at all.

In both cases I was not able to measure any change in operation speed, which leads me to conclude that I may not be properly registered in the first place.

Is there a bullet-proof way to ensure that I'm properly registered? (I do not receive a popup message or any other indication...)
Thanks.

There's no way to check whether the key has been set correctly. But the lack of a speed difference means that the inefficiency of your settings is significant enough to "hide" the delays of the evaluation version. If you describe your usage scenario (how many files, of what size, are added/deleted/read/written), we will suggest some ways to optimize operations.

The main choke point is when we access a large file contained in the archive. The files we commonly use are not under 250 MB, and can be as big as 20 GB.

We have a collection of small contained files in a separate folder, and usually one (or a few) large contained files.

To measure the performance of the tool, I'm using a single storage file with a single contained 240 MB file.

We have experimented with different Buffering settings, and have found the best value to be 1.
We have experimented with different PageSize values, and have found that the optimal value is 32K, since we read large chunks of data sequentially before jumping to a new location. Decreasing the PageSize does not help us.

It took me some time to build a small sample project to demonstrate my points, sorry.

I will just need a way to send my zip file to you. I tried to attach it to the forum post, but that does not work, and I am not able to access your e-mail address either, for some unknown reason.

Procedure: the sample file I provided is quite large.
Please copy the contained "sample.data" file to C:\.
Please also make a copy of the same file at the same location, under the name "sample.data2".

(This second copy ensures that we can jump from one file to the other, so that Windows caching does not skew our performance test conclusions.)

The provided source code will:
1) Open the first master file with SolFS.
2) Open the first contained file (located at root level, to simplify the code).
3) Open a second contained file at the same location.
4) Read blocks of data from the first file, interspersed with smaller reads of the second file.
5) Open the second master file using a standard .NET stream.
6) Perform the same kind of reading pattern, for comparison.
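The interleaved reading pattern of steps 1) to 6) can be sketched roughly as follows. This is a minimal illustration in Java (not the actual C# sample), with in-memory streams standing in for the SolFS contained files; the class name and chunk sizes are illustrative only:

```java
import java.io.*;

public class InterleavedReadSketch {
    // Read large sequential blocks from `first`, each followed by a small
    // read of `second` (step 4 above). Returns the total bytes read.
    static long interleavedRead(InputStream first, InputStream second,
                                int bigChunkSize, int smallChunkSize) throws IOException {
        byte[] bigChunk = new byte[bigChunkSize];
        byte[] smallChunk = new byte[smallChunkSize];
        long total = 0;
        int n;
        while ((n = first.read(bigChunk)) > 0) {
            total += n;
            int m = second.read(smallChunk); // small interleaved read of the second file
            if (m > 0) total += m;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        // Stand-in data; the real test opens the two contained files in the storage.
        InputStream first = new ByteArrayInputStream(new byte[1 << 20]);  // "large" file
        InputStream second = new ByteArrayInputStream(new byte[1 << 16]); // second file
        long start = System.nanoTime();
        long total = interleavedRead(first, second, 32 * 1024, 50);
        double seconds = (System.nanoTime() - start) / 1e9;
        System.out.printf("read %d bytes in %.4f s%n", total, seconds);
    }
}
```

The same loop is run once against the SolFS streams and once against the plain file streams, and the elapsed times are compared.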

Conclusion: depending on cache usage, I get 3 to 10 times better performance using a standard .NET stream than going through the SolFS system.
Obviously, bypassing SolFS is not an option for our application, which handles lots of files, but this illustrates the performance cost that I would like to alleviate.

I will continue to monitor this case for the user DRichard Richard. He is my colleague, and we are working together to optimize file reading to display volumetric data.

Since the case was moved to the Help Desk, we haven't heard anything about this problem. I would like an update on it, if you have any.

We have continued working on the code sample that was provided to you. One of our modifications was to create a class that inherits from your original SolFS stream class and implements another cache (a fixed size of 64 KB) on top of the original one. We were able to double the performance of read operations. Frankly, we don't understand how this can happen.
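The wrapper is essentially a read-ahead buffer. Here is a minimal sketch of the idea in Java for illustration (the real class derives from the SolFS stream class, which I cannot reproduce here, so this one wraps a generic InputStream; the class name is ours):

```java
import java.io.*;

// Sketch of the extra cache layer: a fixed 64 KB buffer filled by one large
// read of the underlying stream, from which small reads are then served.
// (Conceptually this is what java.io.BufferedInputStream already does.)
public class CachedStream extends FilterInputStream {
    private final byte[] cache = new byte[64 * 1024];
    private int pos = 0;   // next unread byte in the cache
    private int limit = 0; // number of valid bytes in the cache

    public CachedStream(InputStream in) { super(in); }

    @Override
    public int read(byte[] dst, int off, int len) throws IOException {
        if (pos >= limit) {                  // cache empty: refill with one big read
            limit = in.read(cache, 0, cache.length);
            pos = 0;
            if (limit <= 0) return -1;       // underlying stream exhausted
        }
        int n = Math.min(len, limit - pos);  // serve the request from the cache
        System.arraycopy(cache, pos, dst, off, n);
        pos += n;
        return n;
    }

    @Override
    public int read() throws IOException {
        byte[] one = new byte[1];
        return read(one, 0, 1) < 0 ? -1 : one[0] & 0xFF;
    }

    public static void main(String[] args) throws IOException {
        // Usage: many 50-byte reads now hit the underlying stream
        // only once per 64 KB instead of on every call.
        try (InputStream s = new CachedStream(new ByteArrayInputStream(new byte[500_000]))) {
            byte[] chunk = new byte[50];
            long total = 0;
            int n;
            while ((n = s.read(chunk, 0, chunk.length)) > 0) total += n;
            System.out.println("read " + total + " bytes in 50-byte chunks");
        }
    }
}
```

With this layer, a 50-byte read reaches the underlying SolFS stream only once per 64 KB instead of on every call, which is consistent with the speedup we measured.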

When we read a 500 MB file:

- With our cache:

  - Reading in chunks of 50 bytes (sometimes we cannot do otherwise), we see a performance degradation of at least 30%: a transfer rate of ~90 MB/sec.
  - Reading in chunks of 1500 bytes, we obtain a transfer rate of ~120 MB/sec.

- Without our cache (pure SolFS stream only):

  - Reading in chunks of 50 bytes, we obtain a transfer rate of ~1 MB/sec.
  - Reading in chunks of 1500 bytes, we obtain a transfer rate of ~40 MB/sec.

- If we read the same SolFS file with the FileStream class provided by .NET instead, we obtain a transfer rate of ~90 MB/sec.
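A quick back-of-the-envelope check on the pure SolFS numbers suggests that the cost is per-call rather than per-byte: if each read carries a roughly fixed overhead, the time per call should come out about the same for 50-byte and 1500-byte chunks. A small calculation from the measured rates above:

```java
public class PerCallOverhead {
    public static void main(String[] args) {
        // Measured transfer rates for the pure SolFS stream (see above).
        double rate50   = 1e6;   // ~1 MB/sec with 50-byte chunks
        double rate1500 = 40e6;  // ~40 MB/sec with 1500-byte chunks

        // Time per read call = chunk size / transfer rate.
        double usPerCall50   = 50   / rate50   * 1e6;
        double usPerCall1500 = 1500 / rate1500 * 1e6;
        System.out.printf("50-byte reads:   ~%.1f us per call%n", usPerCall50);
        System.out.printf("1500-byte reads: ~%.1f us per call%n", usPerCall1500);
    }
}
```

Both come out at roughly 40-50 microseconds per call, which would explain why our extra cache layer (fewer, larger underlying reads) pays off so well.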

We would expect at most a 10%-15% performance loss using SolFS storage compared with the .NET FileStream class.

We have also been very surprised that the Length and Position properties of the SolFS stream class consume a lot of processing time in our profiling sessions. During those sessions, we also noticed that a lot of processing time is spent in the page-loading methods used internally to maintain the cache.
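As a workaround for the Length/Position cost, we now query the length once up front and track the position in a local variable, instead of touching those properties on every iteration of the read loop. Roughly (sketched in Java against a generic random-access file, since I can't post the SolFS class; the method name is ours):

```java
import java.io.*;

public class HoistedLength {
    // Reads a whole file in fixed-size chunks. The length is queried once
    // before the loop and the position is tracked locally, rather than
    // calling the (expensive) Length/Position accessors per iteration.
    static long readAll(RandomAccessFile file, int chunkSize) throws IOException {
        long length = file.length();   // queried once, not per iteration
        long position = 0;             // tracked locally
        byte[] chunk = new byte[chunkSize];
        while (position < length) {
            int n = file.read(chunk, 0, (int) Math.min(chunkSize, length - position));
            if (n <= 0) break;
            position += n;
        }
        return position;
    }

    public static void main(String[] args) throws IOException {
        File tmp = File.createTempFile("hoist", ".bin");
        try (RandomAccessFile f = new RandomAccessFile(tmp, "rw")) {
            f.write(new byte[100_000]);
            f.seek(0);
            System.out.println("read " + readAll(f, 1500) + " bytes");
        }
        tmp.delete();
    }
}
```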

I think it would be nice to have a special mode that accelerates reading performance. In our case, once a file contains data acquired from a device and the acquisition is finished, the file can never change again. So perhaps some optimization could be done to increase read performance for such read-only files.

If you want, I can provide the updated code sample. Also, if you can set up an FTP link, I can send you a typical data file we acquire during an inspection.

I would like to know what you think about the information in this post. It is very important for us to reach the performance of the .NET FileStream class and to improve the performance of reading small chunks (< 200 bytes).
