NOTE: I updated the title to reflect a new development. The old title was 'rhoCentralFoam solver with Slip BCs fails in Parallel Only'.

Hi OpenFOAM users!
I am trying to solve the flow over a very small cylinder at near-transonic Mach numbers using rhoCentralFoam (OpenFOAM version 2.1.0).

The issue is that the solver runs without any floating point exception as a single process, but when I decompose the mesh and run on 4 cores it crashes with a floating point error.

On debugging the parallel run, the error occurred at the point where mu is calculated using Sutherland's law, because T had gone negative. At that same time step, T is not negative when running on a single processor.

When running with a constant-value boundary condition at the wire wall, there is no problem in either single-processor or parallel runs.

So, switching back to the slip boundary conditions: when running on a single processor, all of the fields stay bounded (they are all reasonably valued). However, when running in parallel, very early in the run the rho, rhoU, and U fields are way off, by a factor of about 10^6!

I have attached my modified solver "boundedRhoCentralFoam" and the case I am investigating "wire".

The case has two sets of initial conditions. The 0 directory uses slip/temperature-jump conditions at the wire's wall. The 0.org directory uses constant wall conditions.

I believe the issue is with the BCs defined in the rhoCentralFoam solver:

smoluchowskiJumpTFvPatchScalarField

maxwellSlipUFvPatchVectorField
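For reference, these BCs are typically specified in the 0 directory along these lines (a sketch only; the coefficient values here are illustrative, not the ones from my attached case):

```
// 0/U, boundaryField entry for the wire wall
wall
{
    type                maxwellSlipU;
    accommodationCoeff  1.0;                // illustrative
    Uwall               uniform (0 0 0);
    thermalCreep        true;
    curvature           true;
    value               uniform (0 0 0);
}

// 0/T, boundaryField entry for the wire wall
wall
{
    type                smoluchowskiJumpT;
    accommodationCoeff  1.0;                // illustrative
    Twall               uniform 300;        // illustrative
    value               uniform 300;
}
```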

I have reached my limit of debugging this problem, and I would really appreciate any help or suggestions on this.

Hi,
Just an update on this front. After more investigation, it seems to me that the boundaries created by the processor patches may well be the cause of the issue. I am not entirely sure why this is so.

To illustrate this, I decomposed my mesh using scotch. I also fixed deltaT to a very small value and set writeInterval equal to it, so that results are written at every time step. The solver gives a floating point error during the third time step when run on 4 cores. (I guess by now I do not have to state that on a single processor the solver continues without any error and the solution does seem to form.)
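For completeness, the relevant settings were along these lines (a sketch with illustrative values, not the exact numbers from my case):

```
// system/decomposeParDict
numberOfSubdomains  4;
method              scotch;

// system/controlDict (relevant entries)
adjustTimeStep      no;
deltaT              1e-9;       // illustrative; fixed to a very small value
writeControl        timeStep;
writeInterval       1;          // write every time step
```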

Viewing the results in ParaView is really interesting. It shows a slight variation at the processor patch region. Take a look at the images below. The first shows the part of the mesh that was assigned to 'processor2'; the second is without it. There is a jump in T (shown, but also in the other flow field variables) at the boundary between the patches. Is this normal?

My understanding is that a processor patch should not influence the solution in any way. Why is it doing so in this case? Could this explain why there is no error on a single processor but an error in parallel, since the solution is getting distorted? Does this mean that my earlier assumption that the slip BCs are the issue is incorrect?