I'm working on a scientific project that uses CUDA Fortran from PGI. We have found that pgfortran's OpenMP support is limited: in some situations, OpenMP tasks are not executed in parallel. So we decided to extract part of the code, rewrite it in C++, and compile it with g++.

The other reason we want to mix PGI's pgfortran compiler with GNU's g++ is that we need pgfortran to compile the CUDA Fortran part of our code, while we need NVCC from NVIDIA (which uses g++ to compile the host code) to compile the OpenMP segments that contain CUDA and that pgfortran cannot compile correctly.

Finally, when I link the C++ object files compiled by g++ with the Fortran object files compiled by pgfortran, I get link errors unless I add the -lgomp option. But with -lgomp, the behavior of the OpenMP threads is strange: the Fortran side sees only 1 thread, while the C++ side sees all 10 threads.

I suspect the problem is that the two compilers use different OpenMP runtime libraries, and libgomp is GNU's. Does anyone know how to link them correctly, or how to link against PGI's OpenMP library instead?

Unfortunately, OpenMP implementations from different vendors are typically not compatible, so you will not be able to mix the two OpenMP runtimes.

Note that your best bet is to continue with PGI. It looks like you first reported this issue with our OpenMP implementation back in June (logged as TPR#19410). Our engineers determined that our implementation of this particular OpenMP tasking behavior was legal and valid, but different from what GNU and Intel implemented. Hence, in order to be consistent with the other vendors, we will update our behavior to match theirs. It was too late to get the update into the 13.7 release, but it will be available in September's 13.9 release (there won't be an August 13.8 release).