I have currently implemented CBFS to mount a custom image as, say, drive letter G:. In addition, I want to be able to access it as a block device using Win32 APIs, such as CreateFile("\\.\g:", ..). I'm not sure whether this is possible. Is it? Thank you.

Callback File System emulates a file system, not a block device, so it doesn't even have callbacks to read and write raw disk data. CallbackDisk 2, when released, will probably support this functionality. But I don't think we will extend Callback File System in this direction, as that product has different functions and different design goals.

What about using a combination of CBFS and CBDisk? Is this possible? I am able to get each to work individually. For example, I can use CBDisk to mount an image as a block device with drive letter G: and then access it using the Win32 API, and as mentioned before I can mount a custom file system as G:. What I can't do is both at the same time. Is that possible with your current products? If not, is this something CBDisk 2 will allow? Thanks.

CallbackDisk and Callback File System offer different functions for different purposes. Callback File System is meant to represent sparsely located or distributed data as one disk. CallbackDisk, in contrast, represents one integral storage unit. Combining these two entities makes very little sense. If you explain what exactly you want to achieve, we will consider adding this functionality to Callback File System 3.x (after 3.0). CallbackDisk will remain a *disk* "emulator".

I found this thread because I am considering doing the same thing: using BOTH CallbackDisk and Callback File System TOGETHER. I want to build a "remote" or "hosted" file system, BUT I need cached local storage to improve performance. My thought is to use Callback File System as the "front end" to the OS and display the full directory structure, but use a local blob of storage for caching and use CallbackDisk to read/write files to the blob.

Is this possible? Since this seems to be a common request, is there an example demonstrating this?

When you mount a virtual disk with CallbackDisk, you get a disk device with a drive letter.

You can use the Mapper sample of Callback File System to map a directory (including the root directory) of any disk (including one mounted by CallbackDisk) to a new virtual disk.

But it looks to me like CBFS is not necessary in this case. If the unit of remote storage is a disk cluster stored in a BLOB entry, then CallbackDisk alone would be enough. Alternatively, you can use SolFS OS Edition, which offers more functionality than CallbackDisk. One more option is a combination of SolFS *Application* Edition and CBFS.

To clarify: not all files will be stored in the blob. And the files that ARE in the blob will be in a custom hierarchy that is NOT a proper directory structure. The REMOTE server will contain many more files, along with information on the hierarchy of the directory structure.

My plan was to use CBFS to emulate the formal directory structure and then INTERCEPT the calls. I would then determine if the file was in the local cache (the blob) or not.

With CBFS you don't "intercept" anything - you handle those requests yourself (they are not handled anywhere else).

Unfortunately, without a more detailed description of the complete architecture (and I assume that such information is confidential and should not be posted in a public forum), I can't offer you the right combination of products. In particular, I don't understand how blobs could speed up operations (or what code is supposed to cache those blobs). If you need local storage for some data, take a look at SolFS (Application Edition) - it is very handy for such intermediate storage, and you can keep its pages on a remote server if needed.

I can say for sure that any architecture can be implemented with our products, but you would have to learn more about them and decide which route to take.
