
Executable Output


* Info: Selecting the 'perf-low-ppn' engine for node inti6221


* Info: "ref-cycles" not supported on inti6221: fallback to "cpu-clock"
* Warning: Found no event able to derive walltime: prepending cpu-clock

                      :-) GROMACS - gmx mdrun, 2022.4 (-:

Executable:   /ccc/work/cont001/ocre/oserete/gromacs-2022.4-install-icc-ompi/bin/gmx_mpi
Data prefix:  /ccc/work/cont001/ocre/oserete/gromacs-2022.4-install-icc-ompi
Working dir:  /ccc/work/cont001/ocre/oserete/GROMACS_DATA
Command line:
  gmx_mpi mdrun -s ion_channel.tpr -nsteps 10000 -pin on -deffnm icc


Back Off! I just backed up icc.log to ./#icc.log.2#
Reading file ion_channel.tpr, VERSION 2020.3 (single precision)
Note: file tpx version 119, software tpx version 127
Overriding nsteps with value passed on the command line: 10000 steps, 25 ps
Changing nstlist from 10 to 80, rlist from 1 to 1.129


Using 1 MPI process
Using 52 OpenMP threads 


Overriding thread affinity set outside gmx mdrun

Back Off! I just backed up icc.edr to ./#icc.edr.2#
starting mdrun 'Protein'
10000 steps,     25.0 ps.

Writing final coordinates.

Back Off! I just backed up icc.gro to ./#icc.gro.2#

               Core t (s)   Wall t (s)        (%)
       Time:     3621.897       69.652     5200.0
                 (ns/day)    (hour/ns)
Performance:       31.014        0.774

GROMACS reminds you: "Push It Real Good" (Salt 'n' Pepa)

* Info: Process launched (host inti6221, process 1121912)
* Info: Process finished (host inti6221, process 1121912)

Your experiment path is /ccc/work/cont001/ocre/oserete/GROMACS_DATA/OV1_GROMACS_ZEN3_icc_OMP52_pasv_rerun/tools/lprof_npsu_run_0

To display your profiling results:
####################################################################################################################################################################
#    LEVEL    |     REPORT     |                                                              COMMAND                                                              #
####################################################################################################################################################################
#  Functions  |  Cluster-wide  |  maqao lprof -df xp=/ccc/work/cont001/ocre/oserete/GROMACS_DATA/OV1_GROMACS_ZEN3_icc_OMP52_pasv_rerun/tools/lprof_npsu_run_0      #
#  Functions  |  Per-node      |  maqao lprof -df -dn xp=/ccc/work/cont001/ocre/oserete/GROMACS_DATA/OV1_GROMACS_ZEN3_icc_OMP52_pasv_rerun/tools/lprof_npsu_run_0  #
#  Functions  |  Per-process   |  maqao lprof -df -dp xp=/ccc/work/cont001/ocre/oserete/GROMACS_DATA/OV1_GROMACS_ZEN3_icc_OMP52_pasv_rerun/tools/lprof_npsu_run_0  #
#  Functions  |  Per-thread    |  maqao lprof -df -dt xp=/ccc/work/cont001/ocre/oserete/GROMACS_DATA/OV1_GROMACS_ZEN3_icc_OMP52_pasv_rerun/tools/lprof_npsu_run_0  #
#  Loops      |  Cluster-wide  |  maqao lprof -dl xp=/ccc/work/cont001/ocre/oserete/GROMACS_DATA/OV1_GROMACS_ZEN3_icc_OMP52_pasv_rerun/tools/lprof_npsu_run_0      #
#  Loops      |  Per-node      |  maqao lprof -dl -dn xp=/ccc/work/cont001/ocre/oserete/GROMACS_DATA/OV1_GROMACS_ZEN3_icc_OMP52_pasv_rerun/tools/lprof_npsu_run_0  #
#  Loops      |  Per-process   |  maqao lprof -dl -dp xp=/ccc/work/cont001/ocre/oserete/GROMACS_DATA/OV1_GROMACS_ZEN3_icc_OMP52_pasv_rerun/tools/lprof_npsu_run_0  #
#  Loops      |  Per-thread    |  maqao lprof -dl -dt xp=/ccc/work/cont001/ocre/oserete/GROMACS_DATA/OV1_GROMACS_ZEN3_icc_OMP52_pasv_rerun/tools/lprof_npsu_run_0  #
####################################################################################################################################################################
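For example, to inspect this run from the experiment path reported above, you can invoke the commands listed in the table directly (a minimal usage sketch; the XP shell variable is only a convenience introduced here, not part of the tool's output):

  XP=/ccc/work/cont001/ocre/oserete/GROMACS_DATA/OV1_GROMACS_ZEN3_icc_OMP52_pasv_rerun/tools/lprof_npsu_run_0
  maqao lprof -df xp=$XP        # functions, cluster-wide
  maqao lprof -dl -dt xp=$XP    # loops, per-thread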
