I have created a solver based on icoFoam to solve for the time-dependent concentration distribution of a contrast agent dissolved in the fluid. The aim of the simulation is to recover the concentration values of the agent for blood flow simulation applications. The considered domain is a Y-shaped junction; time-varying boundary conditions have been specified at the inlet for the velocity and the concentration, describing a periodic change in both the concentration and the velocity magnitude.

The problem is that when using a tetrahedral mesh generated with a library (vmtk, based on TetGen), the concentration field takes on abrupt values that are clearly wrong (such as 50000 where the values should lie between 0 and 1), and, contrary to the expected smoothness of the distribution, the concentration shows discontinuities (see images).

The pressure and velocity fields seem to be fine; it is only the transported scalar field that seems to be problematic.

Based on visual inspection the mesh seems fine (it contains uniformly distributed tetrahedral elements), and checkMesh says it is OK. The time step used is 0.01 s.

When running the same simulation on the same geometry with a hex mesh (generated with snappyHexMesh) of comparable resolution, the simulation seems to run fine.

Does anyone have an idea about what could possibly be going wrong in this (very simple) case?

The boundary condition settings (/0/) and the solver settings are based on the elbow tutorial example (2D pipe flow with a junction), plus modifications for T (the scalar field that is transported).
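For context, the modification amounts to solving a passive scalar transport equation for T alongside the icoFoam pressure-velocity solution. A minimal sketch in the style of scalarTransportFoam, assuming a constant diffusivity DT read from transportProperties:

Code:

// Added after the PISO loop in the time loop of a modified icoFoam (sketch):
fvScalarMatrix TEqn
(
    fvm::ddt(T)             // local time derivative
  + fvm::div(phi, T)        // convection by the volumetric flux phi
  - fvm::laplacian(DT, T)   // molecular diffusion with diffusivity DT
);
TEqn.solve();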

vkrastev

As a first attempt I would try a limited scheme on div(phi,T): you are using a pure central differencing scheme, which notoriously can give boundedness problems. Theoretically speaking, the Gamma scheme should be fine, as it was designed to handle unstructured meshes as well. The correct syntax is:

div(phi,T) Gauss Gamma 1;

If the problem persists you can also try other limited schemes (see the User/Programmer's Guides, or look around the forum for the usage of limited schemes in general).
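For reference, this entry belongs in the divSchemes sub-dictionary of system/fvSchemes. A minimal sketch, with a few alternative bounded schemes shown as comments (the selection below is illustrative, not prescriptive):

Code:

divSchemes
{
    div(phi,T)      Gauss Gamma 1;         // bounded NVD scheme, as suggested
    // Alternatives if the problem persists:
    // div(phi,T)   Gauss limitedLinear 1; // limited central differencing
    // div(phi,T)   Gauss vanLeer;         // TVD limiter
    // div(phi,T)   Gauss upwind;          // fully bounded, but very diffusive
}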

Hope this helps

V.

P.S. By the way, what scheme are you using for the velocity field? And what maxCo value results from your deltaT?

mpeti

October 18, 2011 07:36

Dear Vesselin,

Thank you very much for the idea: using the Gamma differencing scheme you suggested seems to have resolved the stability problem for the transported scalar field.

To answer your questions, for the velocity the following settings are used:

div(phi,U) Gauss limitedLinearV 1;

In case the maxCo you referred to means the Courant number: its maximum is larger than 1, around 2.5 most of the time, while the mean is around 0.2. As far as I know the Courant number should be less than 1 to obtain meaningful results. However, further reducing the time step would increase the simulation time, so my question is whether it is really worth the additional effort to guarantee that the condition is never violated. Is there any general principle saying that, for example, if the residuals stay low enough and the results are believable, then the settings are fine even though the Courant number exceeds 1 in some cases?

Peter

P.S.: The purpose of the simulation is to compute the concentration of contrast agent in the blood flow, from which simulated X-ray projection images (angiograms) can be generated, similar to those acquired for diagnostic purposes. The simulation is needed to provide ground-truth velocity information for image analysis algorithms working with the angiographic data, so extreme accuracy in the simulation is not required.

vkrastev

October 18, 2011 10:08

Quote:

Originally Posted by mpeti
(Post 328391)

To answer your questions, for the velocity the following settings are used:

div(phi,U) Gauss limitedLinearV 1;

As you can see, your velocity field was already solved with a limited scheme on the convective term; that's why the velocities didn't blow up like your scalar field did.

Quote:

Originally Posted by mpeti
(Post 328391)

In case the maxCo you referred to means the Courant number: its maximum is larger than 1, around 2.5 most of the time, while the mean is around 0.2. As far as I know the Courant number should be less than 1 to obtain meaningful results. However, further reducing the time step would increase the simulation time, so my question is whether it is really worth the additional effort to guarantee that the condition is never violated. Is there any general principle saying that, for example, if the residuals stay low enough and the results are believable, then the settings are fine even though the Courant number exceeds 1 in some cases?

Well, this is a not-so-obvious question. The problem with the time step length is first of all a stability (not an accuracy) problem: if the time integration scheme is not purely implicit, then the maximum Co number (the Courant number, Co = |U| Δt / Δx, evaluated per cell) should be less than 1 to avoid instabilities and (eventually) "explosions" of the numerical solution. The good news is that the OpenFOAM unsteady solvers (and icoFoam is one of them) usually employ a purely implicit time integration scheme which, in principle, allows a max Co number higher than 1. However, even with this kind of scheme some care must be taken in choosing the time step length, because the larger the time step, the lower the accuracy in time; it can become so poor that the simulation crashes anyway, or at least gives very inaccurate results. So, from my experience, I can give you the following advice:

1) When possible, try to keep the max Co number around 1 even with implicit unsteady solvers like icoFoam. A good way to do this is to introduce adaptive time stepping: you can very easily modify your code by looking at the pimpleFoam solver, which has this capability built in (see the controlDict sketch after this list).

2) If you want to run your simulation with large time steps, you should base your code on the pimpleFoam solver procedure: there, an outer correction loop is introduced which, by cycling several times within a single time step, allows you to retain good accuracy and stability with a max Co far larger than 1. Of course, there is a trade-off between the number of outer cycles and the max Co (otherwise you will not save any time), but I can tell you that I was able to run a high-Re turbulent unsteady simulation with a max Co of about 3.4 and only one outer iteration, which is approximately the same as running with a max Co of about 1.7 and a single time-step integration.
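To illustrate point 1): in solvers that support it (such as pimpleFoam), adaptive time stepping is driven by a few entries in system/controlDict. A minimal sketch, where the maxDeltaT value is just a placeholder:

Code:

adjustTimeStep  yes;     // recompute deltaT every time step
maxCo           1;       // target maximum Courant number
maxDeltaT       0.01;    // upper bound on deltaT (placeholder value)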

Hope this (also) helps

V.

mpeti

October 18, 2011 11:05

If the flow is assumed to be laminar (based on the expected behaviour of the system), should the ability to use larger time steps in pimpleFoam make the computations faster than using the simpler icoFoam solver? (pimpleFoam does contain turbulence modelling, right?)

Thanks a lot,
Peter

vkrastev

October 18, 2011 11:25

Quote:

Originally Posted by mpeti
(Post 328455)

If the flow is assumed to be laminar (based on the expected behaviour of the system), should the ability to use larger time steps in pimpleFoam make the computations faster than using the simpler icoFoam solver? (pimpleFoam does contain turbulence modelling, right?)

Thanks a lot,
Peter

pimpleFoam resolves turbulent as well as laminar flows (you only have to set the simulationType to laminar in your turbulenceProperties dictionary), so the answer is (theoretically) yes, but to reach the best speed/stability/accuracy performance you'll have to play a bit with the number of outer correctors, the time step length and (possibly) the under-relaxation factors (pimpleFoam also allows you to under-relax all the variables before starting a new outer iteration). Also, modifying the pimpleFoam solver for your purposes will be a little more difficult (but I think you can make the effort), because the icoFoam solver structure is the simplest possible, while laminar/turbulent solvers are slightly more complicated.
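For reference, a minimal sketch of the dictionaries involved. The entry names follow standard OpenFOAM usage, but the iteration counts and relaxation factors are illustrative, and the exact dictionary layout can differ between OpenFOAM versions:

Code:

// constant/turbulenceProperties
simulationType  laminar;

// system/fvSolution
PIMPLE
{
    nOuterCorrectors          2;  // outer (PIMPLE) loops per time step
    nCorrectors               2;  // PISO pressure corrections per outer loop
    nNonOrthogonalCorrectors  1;  // extra corrections, useful on tet meshes
}

relaxationFactors
{
    U   0.7;  // example: under-relax U before each new outer iteration
    p   0.3;
}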