#349 closed defect (fixed)
TTbar MPI integration stalls for N_CPU > 5 or so
Reported by: | holsch | Owned by: | Stefan Hoeche |
---|---|---|---|
Priority: | major | Milestone: | rel-2.2.0 |
Component: | ME Generator | Version: | 2.1.1 |
Keywords: | | Cc: | |
Description
With the attached runcard, the mpirun or mpiexec command gets stuck and never starts the integration when using -n 6; the exact threshold seems to depend on the machine, but -n 20 definitely does not work. The workaround used for the integration of the validation plots (as suggested by Frank S) was: mpirun -n 30 Sherpa -f Run.dat MI_HANDLER=None -e 1 STABLE[6]=1 STABLE[24]=1 STABLE[23]=1 STABLE[25]=1
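For readability, here are the failing invocation and the workaround as shell commands (a sketch; the runcard name Run.dat and the rank counts are taken from the description above):

```sh
# Hangs before the integration starts at larger rank counts (machine-dependent):
mpirun -n 20 Sherpa -f Run.dat

# Workaround (as suggested by Frank S): switch off the multiple-interaction
# simulation and keep top (6), Z (23), W (24) and Higgs (25) stable, so their
# decay channels need not be set up; -e 1 generates a single event, since the
# run is only needed for the integration.
mpirun -n 30 Sherpa -f Run.dat MI_HANDLER=None -e 1 \
    STABLE[6]=1 STABLE[24]=1 STABLE[23]=1 STABLE[25]=1
```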
Attachments (1)
Change History (3)
comment:1 Changed 9 years ago by
Note (to myself): Peter Onyisi mentioned that this happens for him only when the top width is set to 1.5 GeV, as in the ATLAS defaults. With higher widths he does not seem to have this problem. We should test whether this is just accidental or whether it gives us a hint toward a solution of this problem.
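A quick way to test the width dependence (a sketch; WIDTH[6] is the usual Sherpa override for the top width, and the second value is purely illustrative):

```sh
# ATLAS-default top width, reported to trigger the stall:
mpirun -n 20 Sherpa -f Run.dat WIDTH[6]=1.5

# Larger (illustrative) width, reportedly unaffected; compare at the same rank count:
mpirun -n 20 Sherpa -f Run.dat WIDTH[6]=2.0
```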
comment:2 Changed 8 years ago by
Resolution: | → fixed |
---|---|
Status: | new → closed |
This has been fixed by Stefan in r28511 by making the Decay_Channel class MPI-aware. The bugfix will be included in release 2.2.1.
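To inspect the referenced revision (a sketch, assuming a working copy checked out from the Sherpa SVN repository; the repository URL is not given in this ticket):

```sh
# Show the commit message and changed paths for the fix:
svn log -v -r 28511

# Show the actual diff introduced by that revision:
svn diff -c 28511
```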
Attachment: run card, ttbar leptonic decays