Firstly, I would like my program to dynamically assign itself to a core of its choosing at runtime and remain bound to it until it later reschedules itself.

Ralph Castain wrote:

>> "If you just want mpirun to respect an external cpuset limitation, it already does so when binding - it will bind within the external limitation"

In my case, the limitation is enforced "internally" by the application once it begins execution. I enforce this during program execution, after mpirun has already finished "binding within the external limitation".

Brice Goglin said:

>> "MPI can bind at two different times: inside mpirun after ssh before running the actual program (this one would ignore your cpuset), later at MPI_Init inside your program (this one will ignore your cpuset only if you call MPI_Init before creating the cpuset)."

Noted. In that case, whose binding is respected during program execution: mpirun's or MPI_Init()'s? Is my understanding from the above correct, i.e. that MPI_Init() is responsible for a second round of binding processes to cores, and can override whatever mpirun or the programmer enforced before it was called (using hwloc, cpusets, sched_load_balance(), and other compatible mechanisms)?

--------------------------------------------

If this is so, in my case the flow of events is thus:

1. mpirun binds an MPI process before it begins execution. So mpirun says: "Bind to some core A." (I don't use a hostfile or rankfile, but I do use the --bind-to-core flag.)

2. Process begins execution on core A

3. I enforce: "Bind to core B." (Remember, it is only at runtime that I know which core I want to be bound to, not while launching the processes with mpirun.) So my process shifts over to core B.