# Known issues in parallel




## A two-box simulation gives different results with one or two pids

We run a simplified version of the Bénard-von Kármán test case for zero time steps, so that only the approximate projection is performed. The modified simulation file is:

```
2 1 GfsSimulation GfsBox GfsGEdge {} {
  Time { iend = 0 }
  Refine 6
  Solid (x*x + y*y - 0.0625*0.0625)
  AdvectionParams { scheme = none }
  OutputTime { istep = 1 } stderr
  OutputProjectionStats { istep = 1 } stderr
  OutputSimulation { start = 0.1 step = 0.1 } simulation.gfs { variables = U,V,P }
}
GfsBox { id = 1 pid = 0 left = Boundary { BcDirichlet U 1 } }
GfsBox { id = 2 pid = 1 right = BoundaryOutflow }
1 2 right
```

We use the following version of Gerris:

```
% gerris2D -V
gerris: using 2D libgfs version 1.3.2 (120310-112425)
  compiled with flags: -DBSD_SOURCE -D_DARWIN_C_SOURCE -D_DARWIN_C_SOURCE
  MPI:        yes
  pkg-config: yes
  m4:         yes
```

First we run without MPI on Mac OS X 10.7.3, on a MacBook Pro with a four-core Intel i7; then we run with MPI. The MPI and compiler versions are:

```
% mpicc --version
Apple clang version 3.1 (tags/Apple/clang-318.0.54) (based on LLVM 3.1svn)
Target: x86_64-apple-darwin11.3.0
Thread model: posix
% mpirun --version
mpirun (Open MPI) 1.5.4

Report bugs to http://www.open-mpi.org/community/help/
```

Here is the result without MPI:

```
% gerris2D twobox-twopid.gfs
step: 0 t: 0.00000000 dt: 1.000000e-01 cpu: 0.12000000 real: 0.12236900
Approximate projection
  niter: 13
  residual.bias: -1.000e-01 -1.984e-04
  residual.first: 5.020e-02 9.960e-05 1.6
  residual.second: 5.668e-01 1.330e-04 1.9
  residual.infty: 6.400e+00 6.251e-04 2
```

On the other hand, if we run the same simulation with MPI and two pids, this is the result:

```
% mpirun -np 2 gerris2D twobox-twopid.gfs
step: 0 t: 0.00000000 dt: 1.000000e-01 cpu: 0.04000000 real: 0.03555900
Approximate projection
  niter: 4
  residual.bias: -1.000e-01 -7.446e-05
  residual.first: 5.020e-02 3.839e-05 6
  residual.second: 5.668e-01 4.914e-05 10
  residual.infty: 6.400e+00 2.713e-04 12
```

The two results are different: the pre-iteration projection statistics (first column) are the same, but the post-iteration statistics differ. However, since the number of boxes is the same in both runs, the MPI communication should transfer exactly the same information that is copied between boxes in the non-MPI run. Thus something is amiss in the way information is exchanged between boxes.
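To make the suspicion concrete: the exchange in question is a halo (ghost-cell) swap between the two pids. Below is a minimal sketch of such a swap in MPI. This is not Gerris code; the buffer size, message tag and fill values are invented for illustration. The point is that after the swap each process should hold exactly the boundary values that the serial run copies directly between boxes 1 and 2, so the multigrid iterations should proceed identically.

```c
/* Minimal sketch of a ghost-cell exchange between two boxes, one per
 * MPI process (run with "mpirun -np 2").  NOT Gerris code: the buffer
 * size, message tag and fill values are hypothetical. */
#include <mpi.h>
#include <stdio.h>

#define N 64  /* cells along the shared boundary at Refine 6 */

int main (int argc, char * argv[])
{
  int rank, other;
  double boundary[N], ghost[N];

  MPI_Init (&argc, &argv);
  MPI_Comm_rank (MPI_COMM_WORLD, &rank);
  other = 1 - rank;  /* assumes exactly two processes */

  /* stand-in for this box's boundary-adjacent pressure values */
  for (int i = 0; i < N; i++)
    boundary[i] = rank + i/(double) N;

  /* swap boundary columns; afterwards ghost[] must match what the
   * serial run copies directly between the two boxes */
  MPI_Sendrecv (boundary, N, MPI_DOUBLE, other, 0,
                ghost,    N, MPI_DOUBLE, other, 0,
                MPI_COMM_WORLD, MPI_STATUS_IGNORE);

  printf ("pid %d: ghost[0] = %g\n", rank, ghost[0]);
  MPI_Finalize ();
  return 0;
}
```

If the parallel run produces different multigrid residuals with the same box decomposition, a natural place to look is whether such swaps are performed at every relaxation level, in the same order, and with the same values as the serial box-to-box copy.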

Using a larger number of boxes and pids (typically 24), we found cases where the non-MPI runs converge but the MPI runs do not, i.e. the residual is not reduced below the required tolerance of 0.001.
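For reference, the stopping test being described amounts to iterating until the normalized residual drops below 0.001 or an iteration cap is hit. Here is a generic sketch, not the actual Gerris multigrid code, with a stand-in relaxation step that simply halves the residual each cycle:

```c
/* Generic illustration of the convergence test described above.
 * NOT Gerris code; relax() is a stand-in for one multigrid cycle. */
#include <stdio.h>

#define TOLERANCE 1e-3
#define NITERMAX  100

static double relax (double residual)
{
  return residual/2.;  /* pretend each cycle halves the residual */
}

int main (void)
{
  double res = 6.4;  /* residual.infty before iterating, from the log above */
  int niter = 0;

  while (res > TOLERANCE && niter < NITERMAX) {
    res = relax (res);
    niter++;
  }
  printf ("niter: %d residual: %.3e %s\n", niter, res,
          res > TOLERANCE ? "(not converged)" : "(converged)");
  return 0;
}
```

With a reduction rate of 2 per cycle, this needs 13 iterations to bring 6.4 below 0.001, consistent with the niter and rate reported by the serial log; the non-converging MPI runs presumably fail to sustain such a rate.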

We have also run the same test case on an Ubuntu system. Here are the results.

1. non-MPI

```
% gerris2D cylinder.gfs
step: 0 t: 0.00000000 dt: 5.263158e-03 cpu: 0.19000000 real: 0.20179100
Approximate projection
  niter: 13
  residual.bias: -5.020e-02 -9.960e-05
  residual.first: 5.020e-02 9.960e-05 1.6
  residual.second: 5.668e-01 1.330e-04 1.9
  residual.infty: 6.400e+00 6.251e-04 2
```

2. MPI

```
% mpirun -np 2 gerris2D cylinder.gfs
step: 0 t: 0.00000000 dt: 5.263158e-03 cpu: 0.05000000 real: 0.05292300
Approximate projection
  niter: 4
  residual.bias: -5.020e-02 -3.737e-05
  residual.first: 5.020e-02 3.839e-05 6
  residual.second: 5.668e-01 4.914e-05 10
  residual.infty: 6.400e+00 2.713e-04 12
```

Notice that the results are the same as on Mac OS X, except for the CPU times and residual.bias: both the pre- and post-iteration residual.bias values are divided by two when going from Mac OS X to Ubuntu.
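That factor of two can be checked directly from the logged residual.bias values; the throwaway check below is nothing Gerris-specific, just the two ratios:

```c
/* Quick check of the factor-of-two observation, using the
 * residual.bias values copied from the logs above. */
#include <stdio.h>

int main (void)
{
  double macosx_pre  = -1.000e-01, ubuntu_pre  = -5.020e-02;
  double macosx_post = -1.984e-04, ubuntu_post = -9.960e-05;

  printf ("pre-iteration  ratio: %.3f\n", ubuntu_pre / macosx_pre);
  printf ("post-iteration ratio: %.3f\n", ubuntu_post / macosx_post);
  return 0;
}
```

Both ratios come out at 0.502, so the halving holds to within about half a percent.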