Top answerer

Axum Implementation

Question

After reading the Language Spec and Programmer's Guide, I remember a note stating that the user had 500,000 waiting agents. This sounds like the micro-thread idea behind Stackless Python - is the implementation along the same lines?

I noticed in one of my tests that having an agent block waiting for a receive completely removes any (noticeable) processing in that agent, whereas, say, setting up a polling mechanism in the agent causes the agent to continue execution (obviously). Is there an under-the-hood scheduler/manager that lets agents share one thread à la Stackless?

I'm waiting for the day when some bright spark implements an ASP.NET server in this akin to Yaws (Erlang web server).

One concept I haven't yet had time to get to grips with is the Interaction Point objects (OrderedInteractionPoint). Where do they come into play? I made a simple app where two agents constantly message each other with an incremented number and print it (a co-routine), purely using send and receive statements.

Sorry for the vague questions.

Adam

May 15, 2009 8:15 PM

Answer

The compiler performs a transformation similar to that of C# iterators (yield return). Briefly, when encountering an asynchronous call (such as receive), we return from the method, hoisting the locals into the method frame class; the rest of the method is transformed into a continuation that runs upon completion of the asynchronous method. If all the methods in the call chain are asynchronous, we return all the way to the Thread Pool, releasing the thread. We have a post that describes this in more detail here:
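A rough way to see this transformation outside Axum is with a generator: the locals live in the generator's frame (standing in for the compiler-generated frame class), and `yield` plays the role of the asynchronous receive, returning control to the scheduler until the continuation is resumed. This is an illustrative Python sketch; the names are mine, not Axum's runtime API.

```python
# Sketch of the continuation transformation described above.
# 'yield' marks the asynchronous receive: the method "returns" there,
# and the code after it runs as a continuation when a value arrives.

def agent_body(_):
    total = 0            # local, hoisted into the generator's frame
    msg = yield          # 'receive': give the thread back to the scheduler
    total += msg         # continuation: runs once the message is delivered
    return total

def deliver(agent, message):
    gen = agent(None)
    next(gen)                  # run the method up to its first receive
    try:
        gen.send(message)      # message arrives; continuation executes
    except StopIteration as done:
        return done.value      # the method's return value

result = deliver(agent_body, 41)
print(result)  # 41
```

The key property is the same as in the answer above: between `next` and `send`, no thread is dedicated to the blocked method, only its suspended frame.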

The feature that you're talking about, which both Erlang and Stackless Python use, is a well-known and old technique called 'linked stacks.' Under this scheme, you do not allocate a contiguous block of memory to use for a program stack; instead, each method frame is allocated on the heap and linked to its caller's frame.
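The data structure can be sketched in a few lines: frames are heap objects chained by a caller pointer, so a blocked agent holds on to only its live frames rather than a whole preallocated stack. This Python sketch only models the frame layout (it still rides on Python's own call stack); the names are illustrative.

```python
# Minimal model of a linked stack: each call allocates its frame on
# the heap and links it to the caller's frame, instead of growing one
# contiguous stack region.

class Frame:
    def __init__(self, locals_, caller):
        self.locals = locals_
        self.caller = caller       # link to the invoking frame

def call(fn, args, caller_frame):
    frame = Frame(dict(args), caller_frame)   # heap allocation, not stack growth
    return fn(frame)

def leaf(frame):
    return frame.locals["n"] * 2

def root(frame):
    # a "call" from root to leaf creates a new linked frame
    return call(leaf, {"n": frame.locals["n"] + 1}, frame)

top = Frame({"n": 20}, None)       # root frame, e.g. an agent constructor
print(root(top))  # 42
```

A suspended agent then costs exactly the frames still reachable from its leaf, which is why half a million blocked agents are affordable.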

Axum uses linked stacks optionally -- they have a cost of their own, so they are only beneficial when you are going to block on message receives a lot. Generally, you have to declare a method 'asynchronous' for it to use a linked frame. The exception is the agent constructor, which is meant to be the root frame (first invocation) of all Axum user code. Our intent is that all agent constructors will eventually use linked frames, but since we haven't done any work on the VS debug engine yet, linked frames make your code very hard to debug. Therefore, it is under compiler control: with the command-line compiler, use /async to make agent constructors asynchronous; in the IDE, set the 'Asynchronous Agent Constructor' property on each Axum project.

The Axum runtime does use very few threads when you use linked frames -- in my example, I was running 500,000 blocked agents on 6 threads (the same number I got for just starting the application)! We're using the I/O thread pool for our scheduling, which is a very efficient implementation for the threads that we do need.
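The many-agents-few-threads effect can be reproduced with any cooperative scheduler; here asyncio stands in for the Axum runtime (this is an analogy, not Axum's implementation): ten thousand agents blocked on a receive, while essentially only the main thread exists.

```python
# Analogy for "500,000 blocked agents on 6 threads": blocked agents
# consume suspended frames, not OS threads. asyncio plays the role
# of the runtime's scheduler here.

import asyncio
import threading

async def agent(mailbox):
    msg = await mailbox.get()   # blocks the agent, not a thread
    return msg

async def main():
    mailboxes = [asyncio.Queue() for _ in range(10_000)]
    tasks = [asyncio.create_task(agent(m)) for m in mailboxes]
    await asyncio.sleep(0)                 # let every agent reach its receive
    assert threading.active_count() <= 2   # still essentially one thread
    for i, m in enumerate(mailboxes):
        m.put_nowait(i)                    # wake each agent with a message
    results = await asyncio.gather(*tasks)
    print(results[-1])  # 9999
    return results

asyncio.run(main())
```

The thread count stays flat no matter how many agents are parked on a receive, which is the same observation as the 500,000-agent experiment.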

OrderedInteractionPoint objects are intended for communication between agents in the same domain: they are less expensive and less capable than channels; unlike channels, they don't enforce value semantics (there is typically no need).
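The co-routine experiment in the question maps naturally onto such an ordered, in-order FIFO between two agents in one domain. A sketch with plain asyncio queues standing in for OrderedInteractionPoint (illustrative only, not the Axum API): two agents bounce an incremented number back and forth until a limit is reached.

```python
# Two agents exchange an incremented number through ordered FIFOs,
# modeling in-domain interaction points (messages arrive in send
# order, with no value-semantics copying).

import asyncio

async def agent(point_in, point_out, limit):
    while True:
        n = await point_in.get()        # receive
        if n >= limit:
            point_out.put_nowait(n)     # let the peer terminate too
            return n
        point_out.put_nowait(n + 1)     # send the incremented number

async def main():
    a, b = asyncio.Queue(), asyncio.Queue()
    t1 = asyncio.create_task(agent(a, b, 10))
    t2 = asyncio.create_task(agent(b, a, 10))
    a.put_nowait(0)                     # kick off the exchange
    results = await asyncio.gather(t1, t2)
    print(results)  # [10, 10]
    return results

asyncio.run(main())
```

Because both agents live in the same "domain" (process, here), the point can hand references straight across, which is the cheapness the answer alludes to.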
