I'm trying to implement asynchronous resource loading in my engine (using a separate thread for I/O and decompression) for the first time, and I couldn't find solutions to a few problems.

1) How should I implement waiting for a specific resource to be loaded?

For instance, when I'm dragging an asset from the asset tree view and dropping it onto the render viewport in my editor, I need to block the main thread and wait until the resource data is fully loaded or a timeout is reached.

2) And how do I specify a timeout value for the load request?

I've read that this is typically done with request ids/tokens/handles for manipulating individual requests (e.g. wait/cancel/setPriority). Is that the standard approach?

3) How to avoid temporary allocations/extraneous copies during resource loading?

It's bad for performance when the client allocates a temporary buffer, issues an async read request into that buffer, instantiates the resource, frees the buffer, and repeats this procedure, say, a hundred times.

Should I have one streaming buffer that is filled if the user didn't specify her own output buffer?

1) If you want to block until it's loaded, then why use async loading? The point of async loading is that you can display a nice "please wait" animation instead of blocking, and then receive a notification when your asset is ready, at which point you stop the animation and start using the asset.

2) Yeah, often the function that initiates a request returns some kind of identifier that you can use to check on its status or abort the request. You probably don't need to build timeouts into the system at this level, as long as you've got the ability to abort (and even that's not required for many games). The systems that use the file loader can then implement user-triggered or time-based aborting if they require it.
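A minimal sketch of what such a handle-based API could look like, assuming a single loader object guarded by a mutex; every name here (AsyncLoader, RequestId, LoadStatus) is invented for illustration, not a real API:

```cpp
#include <cstdint>
#include <mutex>
#include <unordered_map>

using RequestId = std::uint32_t;

enum class LoadStatus { Pending, Done, Aborted };

class AsyncLoader {
public:
    // Issue a request; returns an id the caller can poll or abort with.
    RequestId request(/* path, callbacks, ... */) {
        std::lock_guard<std::mutex> lock(mutex_);
        RequestId id = nextId_++;
        status_[id] = LoadStatus::Pending;
        // (a real loader would also enqueue work for the I/O thread here)
        return id;
    }

    LoadStatus status(RequestId id) const {
        std::lock_guard<std::mutex> lock(mutex_);
        auto it = status_.find(id);
        return it == status_.end() ? LoadStatus::Aborted : it->second;
    }

    void abort(RequestId id) {
        std::lock_guard<std::mutex> lock(mutex_);
        auto it = status_.find(id);
        if (it != status_.end() && it->second == LoadStatus::Pending)
            it->second = LoadStatus::Aborted;  // I/O thread skips aborted entries
    }

private:
    mutable std::mutex mutex_;
    RequestId nextId_ = 1;
    std::unordered_map<RequestId, LoadStatus> status_;
};
```

A timeout then lives in the calling system, which simply polls `status()` and calls `abort()` when its own deadline passes.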

3) With decompression, it's common for the decompression thread to keep its own buffers for reuse.

If you are reading a lot of files into temporary memory before parsing them and throwing the allocations away, it might make sense to allocate a small ring of buffers for the file loader to use as defaults. You'd have to collect some statistics on the size of your "temporary" files and how many are being loaded at once to make an educated guess as to the best sizes to use.
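A sketch of such a default-buffer ring, with placeholder sizes and a deliberately simple policy (the caller must be done with a buffer before the ring wraps around); `ScratchRing` and everything in it is invented for illustration:

```cpp
#include <cstddef>
#include <vector>

class ScratchRing {
public:
    // bufferSize and count are placeholders; tune them from real statistics
    // about your "temporary" file sizes and concurrent load counts.
    explicit ScratchRing(std::size_t bufferSize, std::size_t count)
        : buffers_(count, std::vector<std::byte>(bufferSize)) {}

    // Hands out buffers round-robin; with `count` buffers, at most `count`
    // reads may be outstanding before a buffer gets reused.
    std::vector<std::byte>& acquire() {
        std::vector<std::byte>& buf = buffers_[next_];
        next_ = (next_ + 1) % buffers_.size();
        return buf;
    }

private:
    std::vector<std::vector<std::byte>> buffers_;
    std::size_t next_ = 0;
};
```

The file loader would fall back to `acquire()` only when the caller didn't supply its own destination buffer.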

As for "allocate buffer, async read into buffer, instantiate resource from buffer, free buffer", you can often just "instantiate resource, async read into resource". I've split my async asset loading into a few steps to help with this: when you issue an async request, you provide a callback for allocating the asset and a callback for parsing it. When the file size becomes known (I store a table of sizes as part of the file system, which is resident in RAM, so this is immediate), the allocation callback is triggered to create the resource. Then the resource is filled in asynchronously, and when that's done the 'parsing' callback is triggered to finalize the resource. Assets can also report an array of sizes if they require some of their data to be streamed into different allocations (e.g. a texture header for the CPU and the actual texel data for the GPU).
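The allocate/parse split described above could be sketched like this, shown synchronously for brevity (in a real loader the fill between the two callbacks would happen on the I/O thread); all names here are invented:

```cpp
#include <cstddef>
#include <functional>

struct LoadRequest {
    // Called once the file size is known; returns the destination to read into,
    // i.e. memory owned by the freshly instantiated resource.
    std::function<void*(std::size_t size)> allocate;
    // Called after the destination has been filled; finalizes the resource.
    std::function<void(void* data, std::size_t size)> parse;
};

// Stand-in for a resident table of file sizes (immediate lookup, no I/O).
std::size_t lookupFileSize(const char* /*path*/) { return 64; }

void load(const char* path, const LoadRequest& req) {
    std::size_t size = lookupFileSize(path);
    void* dst = req.allocate(size);  // resource instantiated up front
    // ... async read would fill `dst` here, on the I/O thread ...
    req.parse(dst, size);            // finalize once the data is in place
}
```

No temporary buffer appears anywhere: the read lands directly in the resource's own allocation.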

4) If you want to write OS-specific code, you can usually implement async file loading without any extra threads, as OS file interactions are likely async natively (the blocking API is often a wrapper around the async API). However, if you want to do background decompression, that obviously does require some threading ;)
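On POSIX systems, for instance, the native async path looks roughly like this (a sketch with error handling elided; on Windows you'd use `ReadFile` with `OVERLAPPED` instead, and older glibc needs `-lrt` for the aio functions):

```cpp
#include <aio.h>
#include <cerrno>
#include <cstring>
#include <fcntl.h>
#include <unistd.h>

// Starts an async read of `n` bytes at `offset` into `dst`. The caller keeps
// `cb` alive and polls it with aio_error()/aio_return(); no extra thread in
// application code.
void startRead(aiocb* cb, int fd, void* dst, std::size_t n, off_t offset) {
    std::memset(cb, 0, sizeof(*cb));
    cb->aio_fildes = fd;
    cb->aio_buf = dst;
    cb->aio_nbytes = n;
    cb->aio_offset = offset;
    aio_read(cb);  // returns immediately; aio_error() reports EINPROGRESS until done
}
```

The polling side would check `aio_error(cb) == EINPROGRESS` each frame and call `aio_return(cb)` once the read completes.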

1) I need to block only in editor mode, because the resulting code is simpler in my (small) experience:
I either get a pointer and start manipulating the created mesh instance (e.g. move, rotate), or I get an error screen.

2) Where can I find out more about efficient implementations of such a system (a load-request queue with fast access by request id and minimal allocations)?
Right now my queues are dynamic arrays, and I perform a linear search to find requests by their ids.
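One structure that fits that description is an index-plus-generation handle into a slot pool (sometimes called a slot map): lookup is O(1), freed slots are recycled through a free list, and stale ids fail the generation check instead of aliasing a recycled slot. A minimal sketch, all names invented:

```cpp
#include <cstdint>
#include <vector>

struct RequestId { std::uint32_t slot; std::uint32_t generation; };

struct Request { /* path, priority, state, ... */ };

class RequestPool {
public:
    RequestId add(const Request& r) {
        std::uint32_t slot;
        if (!freeList_.empty()) { slot = freeList_.back(); freeList_.pop_back(); }
        else { slot = static_cast<std::uint32_t>(slots_.size()); slots_.push_back({}); }
        slots_[slot].request = r;
        slots_[slot].alive = true;
        return { slot, slots_[slot].generation };
    }

    // O(1): index directly, then verify the generation matches.
    Request* find(RequestId id) {
        if (id.slot >= slots_.size()) return nullptr;
        Slot& s = slots_[id.slot];
        if (!s.alive || s.generation != id.generation) return nullptr;
        return &s.request;
    }

    void remove(RequestId id) {
        if (find(id) == nullptr) return;
        Slot& s = slots_[id.slot];
        s.alive = false;
        ++s.generation;               // invalidates any outstanding ids
        freeList_.push_back(id.slot);
    }

private:
    struct Slot { Request request; std::uint32_t generation = 0; bool alive = false; };
    std::vector<Slot> slots_;
    std::vector<std::uint32_t> freeList_;
};
```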

3) So, your advice would be to delegate the task of managing temporary memory to the client?
(Then I'll have to think about fighting fragmentation on the client side.)