Resource TaskChain - Proper Async Operations and more!

For the past few months/years I've occasionally advertised my TaskChain system in IRC, hosting it on a Gist (GitHub's Pastebin, for those who don't know). It was a single class that people could just drop into their plugin's code base and use.

Well, I've now properly moved it to a GitHub project, and released it as an artifact you can include in projects.

I took a quick look at this, and I fear that the functionality this provides is already implemented in ReactiveX

Sure, but with a much bulkier API, without as clean an integration with the game layer, and without a concept of your game's time units for delays.
Nor does it have a concept of "run this on the game's main thread" that I can see.

I sure wouldn't want to use that API in plugin development. TaskChain is designed around game development, whereas ReactiveX is designed around super generic and alternating thread pools.

"Getting back to main" is the entire point of the system, which I don't see as (easily) doable in ReactiveX.
If it is possible to do everything TC is doing, it surely won't be as clean an API, and you'll be back to the problem TC solves: boilerplate.
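To make the "get off main, then come back to main" pattern concrete, here is a minimal sketch using only the standard library, not TaskChain's actual API: a single-thread executor stands in for the game's main thread, a worker pool does the heavy operation, and the continuation hops back onto "main" for the API call. All names here are hypothetical illustrations.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class MainThreadChain {
    public static List<String> demo() {
        // Stand-in for the game's main thread: a single-thread executor.
        ExecutorService main = Executors.newSingleThreadExecutor();
        // Worker pool for heavy, off-main operations.
        ExecutorService async = Executors.newCachedThreadPool();
        List<String> log = new ArrayList<>();
        try {
            CompletableFuture
                // Heavy work happens off-main...
                .supplyAsync(() -> { log.add("async: heavy lookup"); return "result"; }, async)
                // ...then the chain hops back to "main" to apply it safely.
                .thenAcceptAsync(result -> log.add("main: apply " + result), main)
                .join(); // wait so the demo can return the log
        } finally {
            main.shutdown();
            async.shutdown();
        }
        return log;
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

A chain API wraps exactly this hopping so each extra sync/async step doesn't add another layer of callback boilerplate.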

If anyone has started using TC 3.x, please update to 3.3.4 ASAP.
I have a strong suspicion that there were concurrency issues with Shared Chains from 3.0.0 to 3.3.3; as things got moved around from 2.0 and cleaned up, I was seeing weird issues on my server where it looked like some chains simply did not execute at all, or the chain pipeline was frozen.

I rewrote the whole queue system for it to be much simpler and easier to understand the logic, and removed all of the rule breaking that Shared Chains was doing about adding tasks after a chain had been executed.

That looks rather complex, but it's also really a different goal: re-engineering the event system. While I agree the event system could have been done better (namely Future-style async events), TC is about managing the use of the current system, not re-engineering the event system like RX.

You're right; all I really wanted you to see was Observable#subscribeOn(syncScheduler), which does the main thread execution.

Hmm, I see. But yeah, RX is pretty advanced, and way out of scope for most of this community.

But back to TC! I'm looking for feature suggestions (ones that don't break existing API.... Not ready for a 4.0 yet!)

One idea I just had is a method that takes a queue from the previous return, processes it in sync or parallel (with configurable concurrency), and only moves on to the next task in the chain once the queue is flushed.

This is already possible with callback APIs, but the goal would be to provide it as a clean built-in method.
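A rough sketch of that queue-flushing step, using plain java.util.concurrent rather than TaskChain's actual API (the method name and per-item work are hypothetical): each item in the queue handed over by the previous task is processed on a bounded pool, and the call only returns, letting the chain advance, once every item has finished.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class QueueStep {
    public static List<Integer> processThenContinue(List<Integer> queue, int concurrency) {
        ExecutorService pool = Executors.newFixedThreadPool(concurrency);
        try {
            List<Callable<Integer>> jobs = new ArrayList<>();
            for (int n : queue) {
                jobs.add(() -> n * n); // hypothetical per-item work
            }
            // invokeAll blocks until every item is processed: the "flush"
            // barrier before the next task in the chain may run.
            List<Integer> results = new ArrayList<>();
            for (Future<Integer> f : pool.invokeAll(jobs)) {
                results.add(f.get());
            }
            return results; // handed to the next task in the chain
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(processThenContinue(List.of(1, 2, 3), 2));
    }
}
```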

Very, very cool idea, especially integrating lambdas into it. My only question is how often you need to do something async and then go back to sync; from the looks of it, all your examples are perfectly thread-safe.

It makes you use 2 threads from the thread pool instead of just one, which is rather unnecessary.

I might just be nitpicking but this is still a very cool project.

Cached thread pools re-use threads and only spawn a new one if none are free.

But in TaskChain terms, plain TaskChains aren't about thread safety. They're about getting the heavy operations off of main, then going back to main to do the API calls, then getting back off main for deletion.

Shared Chains are a concurrency tool, but more about logic than handling concurrency for you. If you need to dispatch 3 async tasks and guarantee their order of execution, Shared Chains help with that.

For my mail stuff, it's critical that a user cannot open a mail message before its deletion finishes... Shared Chains enforce that.
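The ordering guarantee can be sketched with the standard library, a hypothetical illustration of the Shared Chain idea, not TaskChain's implementation: tasks enqueued under the same name are chained onto the previous tail future, so each runs only after the one before it, even though all run off-thread.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SharedQueue {
    // One tail future per chain name; new tasks chain onto it.
    private final ConcurrentHashMap<String, CompletableFuture<Void>> tails = new ConcurrentHashMap<>();
    private final ExecutorService pool = Executors.newCachedThreadPool();

    public CompletableFuture<Void> enqueue(String name, Runnable task) {
        // Atomically chain onto the previous tail so order is preserved.
        return tails.compute(name, (key, tail) ->
            (tail == null ? CompletableFuture.completedFuture((Void) null) : tail)
                .thenRunAsync(task, pool));
    }

    public static List<String> demo() {
        SharedQueue q = new SharedQueue();
        List<String> log = Collections.synchronizedList(new ArrayList<>());
        // Deletion is enqueued first, so the open can never overtake it.
        q.enqueue("mail", () -> log.add("delete message"));
        q.enqueue("mail", () -> log.add("open inbox")).join();
        q.pool.shutdown();
        return log;
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```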

A split task will block the main thread while it is running. The split method will check if there is time to continue running the task without dropping tps, otherwise it will halt execution of the thread until the next tick.

Parallel tasks should use runtime.availableProcessors() for the concurrency.

asyncParallel is initially what led to the queue idea.
For that, I would go with this style:

One step in a chain pipeline is expected to complete before the next step runs.

Now, on sync -- this won't work. Sync tasks are expected to run on the main thread. You can only do 1 thing on a thread at a time, so you can't run it in parallel.

Now, you could block main and suspend it while all tasks in the queue run async, but that would corrupt the API design of TaskChain, in that a sync task is expected to be API-safe, which it no longer would be in that design.

I think an .async(task1, task2, task3) style would work (dropping the "parallel" word to avoid new terminology to understand).
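A sketch of what such an .async(task1, task2, task3) step could look like under the hood, hedged as an illustration with standard CompletableFuture, not the actual TaskChain internals: all tasks run concurrently, and the chain only advances once every one has completed.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class AsyncStep {
    public static int demo() {
        ExecutorService pool = Executors.newCachedThreadPool();
        AtomicInteger done = new AtomicInteger();
        try {
            Runnable task = done::incrementAndGet; // hypothetical per-task work
            // All three run concurrently on the pool.
            CompletableFuture.allOf(
                CompletableFuture.runAsync(task, pool),
                CompletableFuture.runAsync(task, pool),
                CompletableFuture.runAsync(task, pool)
            ).join(); // barrier: the next chain step runs only after this returns
        } finally {
            pool.shutdown();
        }
        return done.get(); // all tasks have run by now
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```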

As for the TPS-oriented method, TaskChain is no longer Minecraft-bound. TC Core has no concept of TPS.

A syncQueue would only process 1 at a time on main, and we could provide a TaskChain.backOffQueue(5 /* game units */) or backOffQueue(5, TimeUnit.SECONDS);

This would provide the same results.
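The backOffQueue(5, TimeUnit.SECONDS) idea could be sketched like this, a hypothetical illustration using a ScheduledExecutorService for the wall-clock variant; a game implementation would count ticks instead:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class BackOff {
    // Suspend the chain, then resume the next task after the delay.
    public static long demo(long delayMillis) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        long start = System.nanoTime();
        try {
            CompletableFuture<Void> resumed = new CompletableFuture<>();
            // Completing the future after the delay is the "back off" step.
            scheduler.schedule(() -> resumed.complete(null), delayMillis, TimeUnit.MILLISECONDS);
            resumed.join(); // the next task in the chain would run here
            return (System.nanoTime() - start) / 1_000_000; // elapsed ms
        } finally {
            scheduler.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(demo(50));
    }
}
```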

You would need to do your for loop iteration (3x) beforehand to build a queue in the previous task, then pass it to a queue task to process.

This would accomplish your goal.

The asyncQueue would have configurable concurrency, defaulting to the processor count, yeah.
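That default could look like the following, a hypothetical sketch rather than TaskChain's actual design: a pool sized either to the caller's concurrency or to Runtime.getRuntime().availableProcessors(), with the chain advancing only after the whole queue drains.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class AsyncQueue {
    public static int drain(List<Runnable> queue, int concurrency) {
        // The pool size caps how many queue items run at once.
        ExecutorService pool = Executors.newFixedThreadPool(concurrency);
        AtomicInteger done = new AtomicInteger();
        try {
            CompletableFuture<?>[] futures = queue.stream()
                .map(task -> CompletableFuture.runAsync(() -> {
                    task.run();
                    done.incrementAndGet();
                }, pool))
                .toArray(CompletableFuture[]::new);
            CompletableFuture.allOf(futures).join(); // flush before the next step
        } finally {
            pool.shutdown();
        }
        return done.get();
    }

    public static int drain(List<Runnable> queue) {
        // Default concurrency: one worker per processor.
        return drain(queue, Runtime.getRuntime().availableProcessors());
    }

    public static void main(String[] args) {
        System.out.println(drain(List.<Runnable>of(() -> {}, () -> {}, () -> {})));
    }
}
```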