Help is available by hovering the cursor over any symbol or by checking the MAQAO website.
[ 4 / 4 ] Application profile is long enough (185.86 s)
To have good quality measurements, it is advised that the application profiling time is greater than 10 seconds.
[ 3 / 3 ] Optimization level option is correctly used
[ 3 / 3 ] Most of the time spent in analyzed modules comes from functions compiled with -g and -fno-omit-frame-pointer
The -g option gives access to debugging information, such as source locations. -fno-omit-frame-pointer improves the accuracy of the callchains found during application profiling.
[ 3 / 3 ] Host configuration allows retrieval of all necessary metrics.
[ 3 / 3 ] Architecture specific option -mcpu is used
[ 2 / 2 ] Application is correctly profiled ("Others" category represents 12.54 % of the execution time)
To have a representative profile, it is advised that the "Others" category represents less than 20% of the execution time, so that as much of the user code as possible is analyzed.
[ 1 / 1 ] Lstopo present. The topology (lstopo) report will be generated.
[ 0 / 0 ] Fastmath not used
Consider adding -ffast-math to the compilation flags (or replacing -O3 with -Ofast) to unlock potential extra speedup by relaxing floating-point computation consistency. Warning: floating-point accuracy may be reduced and compliance with IEEE/ISO rules/specifications for math functions will be relaxed; typically, 'errno' will no longer be set after calling some math functions.
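As a hedged illustration of the 'errno' caveat above (a hypothetical test program, not part of the report): built with plain -O3, sqrt() on a negative input sets errno; built with -ffast-math or -Ofast (which imply -fno-math-errno), it typically does not.

```c
#include <errno.h>
#include <math.h>
#include <stdio.h>

/* Hypothetical check of the errno caveat: compile once with -O3 -lm and once
   with -O3 -ffast-math -lm (or -Ofast) and compare the printed errno value. */
int main(void)
{
    volatile double x = -1.0;   /* volatile keeps the call from being constant-folded */
    errno = 0;
    double r = sqrt(x);         /* domain error: the result is NaN */
    /* With default math error handling this prints errno == EDOM;
       with -ffast-math it typically prints 0. */
    printf("sqrt(%f) = %f, errno = %d\n", x, r, errno);
    return 0;
}
```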
[ 4 / 4 ] Enough of the experiment time is spent in analyzed loops (70.29%)
If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.
[ 4 / 4 ] Thread activity is good
On average, more than 99.37% of observed threads are actually active
[ 4 / 4 ] CPU activity is good
CPU cores are active 99.37% of the time
[ 4 / 4 ] Loop profile is not flat
At least one loop has a coverage greater than 4% (61.67%), representing a hotspot for the application
[ 4 / 4 ] Enough of the experiment time is spent in analyzed innermost loops (64.29%)
If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.
[ 4 / 4 ] Affinity is good (99.92%)
Threads are not migrating between CPU cores: they are probably pinned successfully
[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations
It could be more efficient to inline BLAS1 operations by hand
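A minimal sketch of what inlining a BLAS1 operation by hand can look like (hypothetical helper, not taken from the application): for short vectors, a plain loop visible in the caller's translation unit avoids the library call overhead and lets the compiler vectorize it in place.

```c
#include <stddef.h>

/* Hypothetical hand-inlined equivalent of the BLAS1 daxpy operation
   (y := alpha * x + y); no BLAS library is called here. */
static inline void daxpy_inline(size_t n, double alpha,
                                const double *restrict x, double *restrict y)
{
    for (size_t i = 0; i < n; i++)
        y[i] += alpha * x[i];
}
```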
[ 3 / 3 ] Functions mostly use all threads
Functions running on a reduced number of threads (typically sequential code) cover less than 10% of application walltime (0.00%)
[ 3 / 3 ] Cumulative Outermost/In between loops coverage (6.00%) lower than cumulative innermost loop coverage (64.29%)
Having cumulative Outermost/In between loops coverage greater than cumulative innermost loop coverage will make loop optimization more complex
[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations
BLAS2 calls usually make poor use of the cache and could benefit from inlining.
[ 2 / 2 ] Less than 10% (0.00%) is spent in Libm/SVML (special functions)
Loop ID | Analysis | Penalty Score |
---|---|---|
►Loop 2 - kmeans-gcc-O3 | Execution Time: 61 % - Vectorization Ratio: 18.18 % - Vector Length Use: 26.14 % | |
►Loop Computation Issues | | 2 |
○ | [SA] Presence of a large number of scalar integer instructions - Simplify the loop structure, perform loop splitting, or perform unroll and jam. This issue costs 2 points. | 2 |
○ Control Flow Issues | | 0 |
►Vectorization Roadblocks | | 1000 |
○ | [SA] Too many paths (at least 1000 paths) - Simplify the control structure. There are at least 1000 issues ( = paths) costing 1 point each (see the loop-unswitching sketch after this table). | 1000 |
►Loop 1 - kmeans-gcc-O3 | Execution Time: 6 % - Vectorization Ratio: 0.00 % - Vector Length Use: 18.75 % | |
►Control Flow Issues | | 2 |
○ | [SA] Non-innermost loop (InBetween) - Collapse the loop with the innermost ones. This issue costs 2 points. | 2 |
►Vectorization Roadblocks | | 1002 |
○ | [SA] Too many paths (at least 1000 paths) - Simplify the control structure. There are at least 1000 issues ( = paths) costing 1 point each. | 1000 |
○ | [SA] Non-innermost loop (InBetween) - Collapse the loop with the innermost ones. This issue costs 2 points. | 2 |
►Loop 15 - kmeans-gcc-O3 | Execution Time: 2 % - Vectorization Ratio: 10.00 % - Vector Length Use: 23.75 % | |
►Loop Computation Issues | | 6 |
○ | [SA] Less than 10% of the FP ADD/SUB/MUL arithmetic operations are performed using FMA - Reorganize arithmetic expressions to exhibit potential for FMA. This issue costs 4 points. | 4 |
○ | [SA] Presence of a large number of scalar integer instructions - Simplify the loop structure, perform loop splitting, or perform unroll and jam. This issue costs 2 points. | 2 |
►Data Access Issues | | 12 |
○ | [SA] Presence of indirect accesses - Use array restructuring or gather instructions to lower the cost. There are 3 issues ( = indirect data accesses) costing 4 points each. | 12 |
►Vectorization Roadblocks | | 12 |
○ | [SA] Presence of indirect accesses - Use array restructuring or gather instructions to lower the cost. There are 3 issues ( = indirect data accesses) costing 4 points each. | 12 |
►Loop 30 - kmeans-gcc-O3 | Execution Time: 0 % - Vectorization Ratio: 40.00 % - Vector Length Use: 27.50 % | |
►Control Flow Issues | | 1 |
○ | [SA] Presence of calls - Inline either by the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |
►Data Access Issues | | 6 |
○ | [SA] Presence of constant non-unit-stride data accesses - Use array restructuring, perform loop interchange, or use gather instructions to lower the cost a bit. There are 3 issues ( = data accesses) costing 2 points each. | 6 |
►Vectorization Roadblocks | | 7 |
○ | [SA] Presence of calls - Inline either by the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |
○ | [SA] Presence of constant non-unit-stride data accesses - Use array restructuring, perform loop interchange, or use gather instructions to lower the cost a bit. There are 3 issues ( = data accesses) costing 2 points each. | 6 |
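The dominant penalty above is the 'too many paths' roadblock on Loop 2. A minimal sketch of the suggested control-structure simplification, assuming a distance-like kernel with a loop-invariant condition inside the hot loop (illustrative names, not the actual kmeans source): hoisting the test out of the loop (loop unswitching) leaves each innermost loop with a single path.

```c
#include <stddef.h>

/* Before: two paths per iteration of the innermost loop, because a
   loop-invariant condition is re-evaluated on every trip. */
double dist_branchy(const double *a, const double *b, const double *w,
                    size_t nfeatures, int weighted)
{
    double d = 0.0;
    for (size_t f = 0; f < nfeatures; f++) {
        double t = a[f] - b[f];
        if (weighted)
            d += w[f] * t * t;
        else
            d += t * t;
    }
    return d;
}

/* After: the invariant test is hoisted (loop unswitching); each loop body
   now has exactly one path and is a better vectorization candidate. */
double dist_unswitched(const double *a, const double *b, const double *w,
                       size_t nfeatures, int weighted)
{
    double d = 0.0;
    if (weighted) {
        for (size_t f = 0; f < nfeatures; f++) {
            double t = a[f] - b[f];
            d += w[f] * t * t;
        }
    } else {
        for (size_t f = 0; f < nfeatures; f++) {
            double t = a[f] - b[f];
            d += t * t;
        }
    }
    return d;
}
```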
[ 4 / 4 ] Application profile is long enough (119.58 s)
To have good quality measurements, it is advised that the application profiling time is greater than 10 seconds.
[ 3 / 3 ] Optimization level option is correctly used
[ 3 / 3 ] Most of the time spent in analyzed modules comes from functions compiled with -g and -fno-omit-frame-pointer
The -g option gives access to debugging information, such as source locations. -fno-omit-frame-pointer improves the accuracy of the callchains found during application profiling.
[ 3 / 3 ] Host configuration allows retrieval of all necessary metrics.
[ 3 / 3 ] Architecture specific option -mcpu is used
[ 2 / 2 ] Application is correctly profiled ("Others" category represents 12.63 % of the execution time)
To have a representative profile, it is advised that the "Others" category represents less than 20% of the execution time, so that as much of the user code as possible is analyzed.
[ 1 / 1 ] Lstopo present. The topology (lstopo) report will be generated.
[ 0 / 0 ] Fastmath not used
Consider adding -ffast-math to the compilation flags (or replacing -O3 with -Ofast) to unlock potential extra speedup by relaxing floating-point computation consistency. Warning: floating-point accuracy may be reduced and compliance with IEEE/ISO rules/specifications for math functions will be relaxed; typically, 'errno' will no longer be set after calling some math functions.
[ 4 / 4 ] Enough of the experiment time is spent in analyzed loops (70.38%)
If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.
[ 4 / 4 ] Thread activity is good
On average, more than 151.45% of observed threads are actually active
[ 3 / 4 ] CPU activity is below 90% (75.72%)
CPU cores are idle more than 10% of the time. Threads that are supposed to run on these cores are probably waiting on IO or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.
[ 4 / 4 ] Loop profile is not flat
At least one loop has a coverage greater than 4% (61.62%), representing a hotspot for the application
[ 4 / 4 ] Enough of the experiment time is spent in analyzed innermost loops (64.31%)
If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.
[ 3 / 4 ] Affinity stability is lower than 90% (76.91%)
Threads are often migrating to other CPU cores/threads. For OpenMP, typically set (OMP_PLACES=cores OMP_PROC_BIND=close) or (OMP_PLACES=threads OMP_PROC_BIND=spread). With OpenMPI + OpenMP, use --bind-to core --map-by node:PE=$OMP_NUM_THREADS --report-bindings. With IntelMPI + OpenMP, set I_MPI_PIN_DOMAIN=omp:compact or I_MPI_PIN_DOMAIN=omp:scatter and use -print-rank-map.
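A quick way to verify that the pinning settings above actually take effect is to have each OpenMP thread report the CPU it runs on (hypothetical helper, Linux/glibc only, not part of MAQAO):

```c
#define _GNU_SOURCE
#include <omp.h>
#include <sched.h>
#include <stdio.h>

/* Build with: gcc -fopenmp check_affinity.c (file name is illustrative).
   Run it under the same OMP_PLACES / OMP_PROC_BIND (and MPI binding) settings
   as the application; with pinning in effect, the reported CPU ids should be
   distinct and stable across repeated runs. */
int main(void)
{
    #pragma omp parallel
    {
        #pragma omp critical
        printf("thread %d of %d running on cpu %d\n",
               omp_get_thread_num(), omp_get_num_threads(), sched_getcpu());
    }
    return 0;
}
```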
[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations
It could be more efficient to inline BLAS1 operations by hand
[ 0 / 3 ] Too many functions do not use all threads
Functions running on a reduced number of threads (typically sequential code) cover at least 10% of application walltime (45.15%). Check both "Max Inclusive Time Over Threads" and "Nb Threads" in the Functions or Loops tabs, and consider parallelizing sequential regions or improving the parallelization of regions running on a reduced number of threads.
[ 3 / 3 ] Cumulative Outermost/In between loops coverage (6.07%) lower than cumulative innermost loop coverage (64.31%)
Having cumulative Outermost/In between loops coverage greater than cumulative innermost loop coverage will make loop optimization more complex
[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations
BLAS2 calls usually make poor use of the cache and could benefit from inlining.
[ 2 / 2 ] Less than 10% (0.00%) is spent in Libm/SVML (special functions)
Loop ID | Analysis | Penalty Score |
---|---|---|
►Loop 2 - kmeans-gcc-O3 | Execution Time: 61 % - Vectorization Ratio: 18.18 % - Vector Length Use: 26.14 % | |
►Loop Computation Issues | | 2 |
○ | [SA] Presence of a large number of scalar integer instructions - Simplify the loop structure, perform loop splitting, or perform unroll and jam. This issue costs 2 points. | 2 |
○ Control Flow Issues | | 0 |
►Vectorization Roadblocks | | 1000 |
○ | [SA] Too many paths (at least 1000 paths) - Simplify the control structure. There are at least 1000 issues ( = paths) costing 1 point each. | 1000 |
►Loop 1 - kmeans-gcc-O3 | Execution Time: 6 % - Vectorization Ratio: 0.00 % - Vector Length Use: 18.75 % | |
►Control Flow Issues | | 2 |
○ | [SA] Non-innermost loop (InBetween) - Collapse the loop with the innermost ones. This issue costs 2 points. | 2 |
►Vectorization Roadblocks | | 1002 |
○ | [SA] Too many paths (at least 1000 paths) - Simplify the control structure. There are at least 1000 issues ( = paths) costing 1 point each. | 1000 |
○ | [SA] Non-innermost loop (InBetween) - Collapse the loop with the innermost ones. This issue costs 2 points. | 2 |
►Loop 15 - kmeans-gcc-O3 | Execution Time: 2 % - Vectorization Ratio: 10.00 % - Vector Length Use: 23.75 % | |
►Loop Computation Issues | | 6 |
○ | [SA] Less than 10% of the FP ADD/SUB/MUL arithmetic operations are performed using FMA - Reorganize arithmetic expressions to exhibit potential for FMA. This issue costs 4 points. | 4 |
○ | [SA] Presence of a large number of scalar integer instructions - Simplify the loop structure, perform loop splitting, or perform unroll and jam. This issue costs 2 points. | 2 |
►Data Access Issues | | 12 |
○ | [SA] Presence of indirect accesses - Use array restructuring or gather instructions to lower the cost (see the restructuring sketch after this table). There are 3 issues ( = indirect data accesses) costing 4 points each. | 12 |
►Vectorization Roadblocks | | 12 |
○ | [SA] Presence of indirect accesses - Use array restructuring or gather instructions to lower the cost. There are 3 issues ( = indirect data accesses) costing 4 points each. | 12 |
►Loop 30 - kmeans-gcc-O3 | Execution Time: 0 % - Vectorization Ratio: 40.00 % - Vector Length Use: 27.50 % | |
►Control Flow Issues | | 1 |
○ | [SA] Presence of calls - Inline either by the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |
►Data Access Issues | | 6 |
○ | [SA] Presence of constant non-unit-stride data accesses - Use array restructuring, perform loop interchange, or use gather instructions to lower the cost a bit. There are 3 issues ( = data accesses) costing 2 points each. | 6 |
►Vectorization Roadblocks | | 7 |
○ | [SA] Presence of calls - Inline either by the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |
○ | [SA] Presence of constant non-unit-stride data accesses - Use array restructuring, perform loop interchange, or use gather instructions to lower the cost a bit. There are 3 issues ( = data accesses) costing 2 points each. | 6 |
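For the 'indirect accesses' penalties on Loop 15 above, a hedged sketch of the suggested array restructuring (illustrative names, not the actual kmeans source): resolving the indirection once per point leaves a unit-stride innermost loop.

```c
#include <stddef.h>

/* Before: the innermost loop goes through membership[i] on every iteration,
   an indirect (scatter-like) access pattern. */
void accumulate_indirect(double *sums, const double *points, const int *membership,
                         size_t npoints, size_t nfeatures)
{
    for (size_t i = 0; i < npoints; i++)
        for (size_t f = 0; f < nfeatures; f++)
            sums[(size_t)membership[i] * nfeatures + f] += points[i * nfeatures + f];
}

/* After: the indirection is resolved once per point; the innermost loop is a
   straight unit-stride accumulation over contiguous rows. */
void accumulate_restructured(double *sums, const double *points, const int *membership,
                             size_t npoints, size_t nfeatures)
{
    for (size_t i = 0; i < npoints; i++) {
        double *dst = sums + (size_t)membership[i] * nfeatures;
        const double *src = points + i * nfeatures;
        for (size_t f = 0; f < nfeatures; f++)
            dst[f] += src[f];
    }
}
```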
[ 4 / 4 ] Application profile is long enough (87.58 s)
To have good quality measurements, it is advised that the application profiling time is greater than 10 seconds.
[ 3 / 3 ] Optimization level option is correctly used
[ 3 / 3 ] Most of the time spent in analyzed modules comes from functions compiled with -g and -fno-omit-frame-pointer
The -g option gives access to debugging information, such as source locations. -fno-omit-frame-pointer improves the accuracy of the callchains found during application profiling.
[ 3 / 3 ] Host configuration allows retrieval of all necessary metrics.
[ 3 / 3 ] Architecture specific option -mcpu is used
[ 2 / 2 ] Application is correctly profiled ("Others" category represents 13.13 % of the execution time)
To have a representative profile, it is advised that the "Others" category represents less than 20% of the execution time, so that as much of the user code as possible is analyzed.
[ 1 / 1 ] Lstopo present. The topology (lstopo) report will be generated.
[ 0 / 0 ] Fastmath not used
Consider adding -ffast-math to the compilation flags (or replacing -O3 with -Ofast) to unlock potential extra speedup by relaxing floating-point computation consistency. Warning: floating-point accuracy may be reduced and compliance with IEEE/ISO rules/specifications for math functions will be relaxed; typically, 'errno' will no longer be set after calling some math functions.
[ 4 / 4 ] Enough of the experiment time is spent in analyzed loops (70.27%)
If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.
[ 4 / 4 ] Thread activity is good
On average, more than 205.28% of observed threads are actually active
[ 2 / 4 ] CPU activity is below 90% (51.32%)
CPU cores are idle more than 10% of the time. Threads that are supposed to run on these cores are probably waiting on IO or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.
[ 4 / 4 ] Loop profile is not flat
At least one loop has a coverage greater than 4% (61.72%), representing a hotspot for the application
[ 4 / 4 ] Enough of the experiment time is spent in analyzed innermost loops (64.50%)
If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.
[ 2 / 4 ] Affinity stability is lower than 90% (52.92%)
Threads are often migrating to other CPU cores/threads. For OpenMP, typically set (OMP_PLACES=cores OMP_PROC_BIND=close) or (OMP_PLACES=threads OMP_PROC_BIND=spread). With OpenMPI + OpenMP, use --bind-to core --map-by node:PE=$OMP_NUM_THREADS --report-bindings. With IntelMPI + OpenMP, set I_MPI_PIN_DOMAIN=omp:compact or I_MPI_PIN_DOMAIN=omp:scatter and use -print-rank-map.
[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations
It could be more efficient to inline BLAS1 operations by hand
[ 0 / 3 ] Too many functions do not use all threads
Functions running on a reduced number of threads (typically sequential code) cover at least 10% of application walltime (61.51%). Check both "Max Inclusive Time Over Threads" and "Nb Threads" in the Functions or Loops tabs, and consider parallelizing sequential regions or improving the parallelization of regions running on a reduced number of threads.
[ 3 / 3 ] Cumulative Outermost/In between loops coverage (5.78%) lower than cumulative innermost loop coverage (64.50%)
Having cumulative Outermost/In between loops coverage greater than cumulative innermost loop coverage will make loop optimization more complex
[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations
BLAS2 calls usually make poor use of the cache and could benefit from inlining.
[ 2 / 2 ] Less than 10% (0.00%) is spent in Libm/SVML (special functions)
Loop ID | Analysis | Penalty Score |
---|---|---|
►Loop 2 - kmeans-gcc-O3 | Execution Time: 61 % - Vectorization Ratio: 18.18 % - Vector Length Use: 26.14 % | |
►Loop Computation Issues | | 2 |
○ | [SA] Presence of a large number of scalar integer instructions - Simplify the loop structure, perform loop splitting, or perform unroll and jam. This issue costs 2 points. | 2 |
○ Control Flow Issues | | 0 |
►Vectorization Roadblocks | | 1000 |
○ | [SA] Too many paths (at least 1000 paths) - Simplify the control structure. There are at least 1000 issues ( = paths) costing 1 point each. | 1000 |
►Loop 1 - kmeans-gcc-O3 | Execution Time: 5 % - Vectorization Ratio: 0.00 % - Vector Length Use: 18.75 % | |
►Control Flow Issues | | 2 |
○ | [SA] Non-innermost loop (InBetween) - Collapse the loop with the innermost ones. This issue costs 2 points. | 2 |
►Vectorization Roadblocks | | 1002 |
○ | [SA] Too many paths (at least 1000 paths) - Simplify the control structure. There are at least 1000 issues ( = paths) costing 1 point each. | 1000 |
○ | [SA] Non-innermost loop (InBetween) - Collapse the loop with the innermost ones. This issue costs 2 points. | 2 |
►Loop 15 - kmeans-gcc-O3 | Execution Time: 2 % - Vectorization Ratio: 10.00 % - Vector Length Use: 23.75 % | |
►Loop Computation Issues | | 6 |
○ | [SA] Less than 10% of the FP ADD/SUB/MUL arithmetic operations are performed using FMA - Reorganize arithmetic expressions to exhibit potential for FMA (see the FMA sketch after this table). This issue costs 4 points. | 4 |
○ | [SA] Presence of a large number of scalar integer instructions - Simplify the loop structure, perform loop splitting, or perform unroll and jam. This issue costs 2 points. | 2 |
►Data Access Issues | | 12 |
○ | [SA] Presence of indirect accesses - Use array restructuring or gather instructions to lower the cost. There are 3 issues ( = indirect data accesses) costing 4 points each. | 12 |
►Vectorization Roadblocks | | 12 |
○ | [SA] Presence of indirect accesses - Use array restructuring or gather instructions to lower the cost. There are 3 issues ( = indirect data accesses) costing 4 points each. | 12 |
►Loop 30 - kmeans-gcc-O3 | Execution Time: 0 % - Vectorization Ratio: 40.00 % - Vector Length Use: 27.50 % | |
►Control Flow Issues | | 1 |
○ | [SA] Presence of calls - Inline either by the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |
►Data Access Issues | | 6 |
○ | [SA] Presence of constant non-unit-stride data accesses - Use array restructuring, perform loop interchange, or use gather instructions to lower the cost a bit. There are 3 issues ( = data accesses) costing 2 points each. | 6 |
►Vectorization Roadblocks | | 7 |
○ | [SA] Presence of calls - Inline either by the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |
○ | [SA] Presence of constant non-unit-stride data accesses - Use array restructuring, perform loop interchange, or use gather instructions to lower the cost a bit. There are 3 issues ( = data accesses) costing 2 points each. | 6 |
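For the FMA penalty on Loop 15 above, a generic sketch of 'reorganizing arithmetic expressions to exhibit potential for FMA' (illustrative code, unrelated to the actual kmeans expressions): the Horner form evaluates one multiply feeding directly into one add per step, which the compiler can contract into a fused multiply-add.

```c
#include <stddef.h>

/* Before: powers are computed in a separate multiply stream, so only part of
   the arithmetic is FMA-shaped. */
double poly_naive(double x, const double *c, size_t n)
{
    double r = 0.0, xp = 1.0;
    for (size_t i = 0; i < n; i++) {
        r += c[i] * xp;
        xp *= x;
    }
    return r;
}

/* After: Horner form, one contractible multiply-add per coefficient. */
double poly_horner(double x, const double *c, size_t n)
{
    double r = 0.0;
    for (size_t i = n; i-- > 0; )
        r = r * x + c[i];   /* contractible to fma(r, x, c[i]) */
    return r;
}
```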
[ 4 / 4 ] Application profile is long enough (71.32 s)
To have good quality measurements, it is advised that the application profiling time is greater than 10 seconds.
[ 3 / 3 ] Optimization level option is correctly used
[ 3 / 3 ] Most of the time spent in analyzed modules comes from functions compiled with -g and -fno-omit-frame-pointer
The -g option gives access to debugging information, such as source locations. -fno-omit-frame-pointer improves the accuracy of the callchains found during application profiling.
[ 3 / 3 ] Host configuration allows retrieval of all necessary metrics.
[ 3 / 3 ] Architecture specific option -mcpu is used
[ 2 / 2 ] Application is correctly profiled ("Others" category represents 13.00 % of the execution time)
To have a representative profile, it is advised that the "Others" category represents less than 20% of the execution time, so that as much of the user code as possible is analyzed.
[ 1 / 1 ] Lstopo present. The topology (lstopo) report will be generated.
[ 0 / 0 ] Fastmath not used
Consider adding -ffast-math to the compilation flags (or replacing -O3 with -Ofast) to unlock potential extra speedup by relaxing floating-point computation consistency. Warning: floating-point accuracy may be reduced and compliance with IEEE/ISO rules/specifications for math functions will be relaxed; typically, 'errno' will no longer be set after calling some math functions.
[ 4 / 4 ] Enough of the experiment time is spent in analyzed loops (70.38%)
If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.
[ 4 / 4 ] Thread activity is good
On average, more than 251.95% of observed threads are actually active
[ 1 / 4 ] CPU activity is below 90% (31.50%)
CPU cores are idle more than 10% of the time. Threads that are supposed to run on these cores are probably waiting on IO or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.
[ 4 / 4 ] Loop profile is not flat
At least one loop has a coverage greater than 4% (61.70%), representing a hotspot for the application
[ 4 / 4 ] Enough of the experiment time is spent in analyzed innermost loops (64.48%)
If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.
[ 1 / 4 ] Affinity stability is lower than 90% (32.92%)
Threads are often migrating to other CPU cores/threads. For OpenMP, typically set (OMP_PLACES=cores OMP_PROC_BIND=close) or (OMP_PLACES=threads OMP_PROC_BIND=spread). With OpenMPI + OpenMP, use --bind-to core --map-by node:PE=$OMP_NUM_THREADS --report-bindings. With IntelMPI + OpenMP, set I_MPI_PIN_DOMAIN=omp:compact or I_MPI_PIN_DOMAIN=omp:scatter and use -print-rank-map.
[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations
It could be more efficient to inline BLAS1 operations by hand
[ 0 / 3 ] Too many functions do not use all threads
Functions running on a reduced number of threads (typically sequential code) cover at least 10% of application walltime (75.25%). Check both "Max Inclusive Time Over Threads" and "Nb Threads" in the Functions or Loops tabs, and consider parallelizing sequential regions or improving the parallelization of regions running on a reduced number of threads.
[ 3 / 3 ] Cumulative Outermost/In between loops coverage (5.91%) lower than cumulative innermost loop coverage (64.48%)
Having cumulative Outermost/In between loops coverage greater than cumulative innermost loop coverage will make loop optimization more complex
[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations
BLAS2 calls usually make poor use of the cache and could benefit from inlining.
[ 2 / 2 ] Less than 10% (0.00%) is spent in Libm/SVML (special functions)
Loop ID | Analysis | Penalty Score |
---|---|---|
►Loop 2 - kmeans-gcc-O3 | Execution Time: 61 % - Vectorization Ratio: 18.18 % - Vector Length Use: 26.14 % | |
►Loop Computation Issues | | 2 |
○ | [SA] Presence of a large number of scalar integer instructions - Simplify the loop structure, perform loop splitting, or perform unroll and jam. This issue costs 2 points. | 2 |
○ Control Flow Issues | | 0 |
►Vectorization Roadblocks | | 1000 |
○ | [SA] Too many paths (at least 1000 paths) - Simplify the control structure. There are at least 1000 issues ( = paths) costing 1 point each. | 1000 |
►Loop 1 - kmeans-gcc-O3 | Execution Time: 5 % - Vectorization Ratio: 0.00 % - Vector Length Use: 18.75 % | |
►Control Flow Issues | | 2 |
○ | [SA] Non-innermost loop (InBetween) - Collapse the loop with the innermost ones. This issue costs 2 points. | 2 |
►Vectorization Roadblocks | | 1002 |
○ | [SA] Too many paths (at least 1000 paths) - Simplify the control structure. There are at least 1000 issues ( = paths) costing 1 point each. | 1000 |
○ | [SA] Non-innermost loop (InBetween) - Collapse the loop with the innermost ones. This issue costs 2 points. | 2 |
►Loop 15 - kmeans-gcc-O3 | Execution Time: 2 % - Vectorization Ratio: 10.00 % - Vector Length Use: 23.75 % | |
►Loop Computation Issues | | 6 |
○ | [SA] Less than 10% of the FP ADD/SUB/MUL arithmetic operations are performed using FMA - Reorganize arithmetic expressions to exhibit potential for FMA. This issue costs 4 points. | 4 |
○ | [SA] Presence of a large number of scalar integer instructions - Simplify the loop structure, perform loop splitting, or perform unroll and jam. This issue costs 2 points. | 2 |
►Data Access Issues | | 12 |
○ | [SA] Presence of indirect accesses - Use array restructuring or gather instructions to lower the cost. There are 3 issues ( = indirect data accesses) costing 4 points each. | 12 |
►Vectorization Roadblocks | | 12 |
○ | [SA] Presence of indirect accesses - Use array restructuring or gather instructions to lower the cost. There are 3 issues ( = indirect data accesses) costing 4 points each. | 12 |
►Loop 30 - kmeans-gcc-O3 | Execution Time: 0 % - Vectorization Ratio: 40.00 % - Vector Length Use: 27.50 % | |
►Control Flow Issues | | 1 |
○ | [SA] Presence of calls - Inline either by the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |
►Data Access Issues | | 6 |
○ | [SA] Presence of constant non-unit-stride data accesses - Use array restructuring, perform loop interchange, or use gather instructions to lower the cost a bit (see the loop-interchange sketch after this table). There are 3 issues ( = data accesses) costing 2 points each. | 6 |
►Vectorization Roadblocks | | 7 |
○ | [SA] Presence of calls - Inline either by the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |
○ | [SA] Presence of constant non-unit-stride data accesses - Use array restructuring, perform loop interchange, or use gather instructions to lower the cost a bit. There are 3 issues ( = data accesses) costing 2 points each. | 6 |
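For the 'constant non-unit-stride data accesses' penalties on Loop 30 above, a hedged sketch of the suggested loop interchange (illustrative names, not the actual kmeans source), assuming a row-major matrix:

```c
#include <stddef.h>

/* Before: the innermost loop walks a column of a row-major matrix,
   i.e. a constant stride of ncols elements. */
void col_sums_strided(double *sums, const double *m, size_t nrows, size_t ncols)
{
    for (size_t c = 0; c < ncols; c++) {
        sums[c] = 0.0;
        for (size_t r = 0; r < nrows; r++)
            sums[c] += m[r * ncols + c];
    }
}

/* After: interchanging the loops makes the innermost access unit-stride. */
void col_sums_interchanged(double *sums, const double *m, size_t nrows, size_t ncols)
{
    for (size_t c = 0; c < ncols; c++)
        sums[c] = 0.0;
    for (size_t r = 0; r < nrows; r++) {
        const double *row = m + r * ncols;
        for (size_t c = 0; c < ncols; c++)
            sums[c] += row[c];
    }
}
```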
[ 4 / 4 ] Application profile is long enough (63.26 s)
To have good quality measurements, it is advised that the application profiling time is greater than 10 seconds.
[ 3 / 3 ] Optimization level option is correctly used
[ 3 / 3 ] Most of the time spent in analyzed modules comes from functions compiled with -g and -fno-omit-frame-pointer
The -g option gives access to debugging information, such as source locations. -fno-omit-frame-pointer improves the accuracy of the callchains found during application profiling.
[ 3 / 3 ] Host configuration allows retrieval of all necessary metrics.
[ 3 / 3 ] Architecture specific option -mcpu is used
[ 2 / 2 ] Application is correctly profiled ("Others" category represents 12.82 % of the execution time)
To have a representative profile, it is advised that the "Others" category represents less than 20% of the execution time, so that as much of the user code as possible is analyzed.
[ 1 / 1 ] Lstopo present. The topology (lstopo) report will be generated.
[ 0 / 0 ] Fastmath not used
Consider adding -ffast-math to the compilation flags (or replacing -O3 with -Ofast) to unlock potential extra speedup by relaxing floating-point computation consistency. Warning: floating-point accuracy may be reduced and compliance with IEEE/ISO rules/specifications for math functions will be relaxed; typically, 'errno' will no longer be set after calling some math functions.
[ 4 / 4 ] Enough of the experiment time is spent in analyzed loops (70.37%)
If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.
[ 4 / 4 ] Thread activity is good
On average, more than 285.32% of observed threads are actually active
[ 0 / 4 ] CPU activity is below 90% (17.83%)
CPU cores are idle more than 10% of the time. Threads that are supposed to run on these cores are probably waiting on IO or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.
[ 4 / 4 ] Loop profile is not flat
At least one loop has a coverage greater than 4% (61.61%), representing a hotspot for the application
[ 4 / 4 ] Enough of the experiment time is spent in analyzed innermost loops (64.41%)
If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.
[ 0 / 4 ] Affinity stability is lower than 90% (18.82%)
Threads are often migrating to other CPU cores/threads. For OpenMP, typically set (OMP_PLACES=cores OMP_PROC_BIND=close) or (OMP_PLACES=threads OMP_PROC_BIND=spread). With OpenMPI + OpenMP, use --bind-to core --map-by node:PE=$OMP_NUM_THREADS --report-bindings. With IntelMPI + OpenMP, set I_MPI_PIN_DOMAIN=omp:compact or I_MPI_PIN_DOMAIN=omp:scatter and use -print-rank-map.
[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations
It could be more efficient to inline BLAS1 operations by hand
[ 0 / 3 ] Too many functions do not use all threads
Functions running on a reduced number of threads (typically sequential code) cover at least 10% of application walltime (85.13%). Check both "Max Inclusive Time Over Threads" and "Nb Threads" in the Functions or Loops tabs, and consider parallelizing sequential regions or improving the parallelization of regions running on a reduced number of threads.
[ 3 / 3 ] Cumulative Outermost/In between loops coverage (5.96%) lower than cumulative innermost loop coverage (64.41%)
Having cumulative Outermost/In between loops coverage greater than cumulative innermost loop coverage will make loop optimization more complex
[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations
BLAS2 calls usually make poor use of the cache and could benefit from inlining.
[ 2 / 2 ] Less than 10% (0.00%) is spent in Libm/SVML (special functions)
Loop ID | Analysis | Penalty Score |
---|---|---|
►Loop 2 - kmeans-gcc-O3 | Execution Time: 61 % - Vectorization Ratio: 18.18 % - Vector Length Use: 26.14 % | |
►Loop Computation Issues | | 2 |
○ | [SA] Presence of a large number of scalar integer instructions - Simplify the loop structure, perform loop splitting, or perform unroll and jam. This issue costs 2 points. | 2 |
○ Control Flow Issues | | 0 |
►Vectorization Roadblocks | | 1000 |
○ | [SA] Too many paths (at least 1000 paths) - Simplify the control structure. There are at least 1000 issues ( = paths) costing 1 point each. | 1000 |
►Loop 1 - kmeans-gcc-O3 | Execution Time: 5 % - Vectorization Ratio: 0.00 % - Vector Length Use: 18.75 % | |
►Control Flow Issues | | 2 |
○ | [SA] Non-innermost loop (InBetween) - Collapse the loop with the innermost ones (see the loop-collapsing sketch after this table). This issue costs 2 points. | 2 |
►Vectorization Roadblocks | | 1002 |
○ | [SA] Too many paths (at least 1000 paths) - Simplify the control structure. There are at least 1000 issues ( = paths) costing 1 point each. | 1000 |
○ | [SA] Non-innermost loop (InBetween) - Collapse the loop with the innermost ones. This issue costs 2 points. | 2 |
►Loop 15 - kmeans-gcc-O3 | Execution Time: 2 % - Vectorization Ratio: 10.00 % - Vector Length Use: 23.75 % | |
►Loop Computation Issues | | 6 |
○ | [SA] Less than 10% of the FP ADD/SUB/MUL arithmetic operations are performed using FMA - Reorganize arithmetic expressions to exhibit potential for FMA. This issue costs 4 points. | 4 |
○ | [SA] Presence of a large number of scalar integer instructions - Simplify the loop structure, perform loop splitting, or perform unroll and jam. This issue costs 2 points. | 2 |
►Data Access Issues | | 12 |
○ | [SA] Presence of indirect accesses - Use array restructuring or gather instructions to lower the cost. There are 3 issues ( = indirect data accesses) costing 4 points each. | 12 |
►Vectorization Roadblocks | | 12 |
○ | [SA] Presence of indirect accesses - Use array restructuring or gather instructions to lower the cost. There are 3 issues ( = indirect data accesses) costing 4 points each. | 12 |
►Loop 30 - kmeans-gcc-O3 | Execution Time: 0 % - Vectorization Ratio: 40.00 % - Vector Length Use: 27.50 % | |
►Control Flow Issues | | 1 |
○ | [SA] Presence of calls - Inline either by the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |
►Data Access Issues | | 6 |
○ | [SA] Presence of constant non-unit-stride data accesses - Use array restructuring, perform loop interchange, or use gather instructions to lower the cost a bit. There are 3 issues ( = data accesses) costing 2 points each. | 6 |
►Vectorization Roadblocks | | 7 |
○ | [SA] Presence of calls - Inline either by the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |
○ | [SA] Presence of constant non-unit-stride data accesses - Use array restructuring, perform loop interchange, or use gather instructions to lower the cost a bit. There are 3 issues ( = data accesses) costing 2 points each. | 6 |
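For the 'non-innermost loop (InBetween)' items on Loop 1 above, a minimal sketch of collapsing a loop nest into a single innermost loop (illustrative code, not the actual kmeans source), assuming the iterations are independent:

```c
#include <stddef.h>

/* Before: a short innermost loop nested inside an "in between" loop. */
void scale_nested(double *a, size_t nrows, size_t ncols, double s)
{
    for (size_t i = 0; i < nrows; i++)
        for (size_t j = 0; j < ncols; j++)
            a[i * ncols + j] *= s;
}

/* After: the two loops are collapsed into one long innermost loop, giving the
   compiler a single large trip count to vectorize and unroll. */
void scale_collapsed(double *a, size_t nrows, size_t ncols, double s)
{
    for (size_t k = 0; k < nrows * ncols; k++)
        a[k] *= s;
}
```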
[ 4 / 4 ] Application profile is long enough (59.12 s)
To have good quality measurements, it is advised that the application profiling time is greater than 10 seconds.
[ 3 / 3 ] Optimization level option is correctly used
[ 3 / 3 ] Most of the time spent in analyzed modules comes from functions compiled with -g and -fno-omit-frame-pointer
The -g option gives access to debugging information, such as source locations. -fno-omit-frame-pointer improves the accuracy of the callchains found during application profiling.
[ 3 / 3 ] Host configuration allows retrieval of all necessary metrics.
[ 3 / 3 ] Architecture specific option -mcpu is used
[ 2 / 2 ] Application is correctly profiled ("Others" category represents 12.47 % of the execution time)
To have a representative profile, it is advised that the "Others" category represents less than 20% of the execution time, so that as much of the user code as possible is analyzed.
[ 1 / 1 ] Lstopo present. The topology (lstopo) report will be generated.
[ 0 / 0 ] Fastmath not used
Consider adding -ffast-math to the compilation flags (or replacing -O3 with -Ofast) to unlock potential extra speedup by relaxing floating-point computation consistency. Warning: floating-point accuracy may be reduced and compliance with IEEE/ISO rules/specifications for math functions will be relaxed; typically, 'errno' will no longer be set after calling some math functions.
[ 4 / 4 ] Enough of the experiment time is spent in analyzed loops (70.69%)
If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.
[ 4 / 4 ] Thread activity is good
On average, more than 308.11% of observed threads are actually active
[ 0 / 4 ] CPU activity is below 90% (9.63%)
CPU cores are idle more than 10% of the time. Threads that are supposed to run on these cores are probably waiting on IO or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.
[ 4 / 4 ] Loop profile is not flat
At least one loop has a coverage greater than 4% (61.03%), representing a hotspot for the application
[ 4 / 4 ] Enough of the experiment time is spent in analyzed innermost loops (64.68%)
If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.
[ 0 / 4 ] Affinity stability is lower than 90% (10.55%)
Threads are often migrating to other CPU cores/threads. For OpenMP, typically set (OMP_PLACES=cores OMP_PROC_BIND=close) or (OMP_PLACES=threads OMP_PROC_BIND=spread). With OpenMPI + OpenMP, use --bind-to core --map-by node:PE=$OMP_NUM_THREADS --report-bindings. With IntelMPI + OpenMP, set I_MPI_PIN_DOMAIN=omp:compact or I_MPI_PIN_DOMAIN=omp:scatter and use -print-rank-map.
[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations
It could be more efficient to inline BLAS1 operations by hand
[ 0 / 3 ] Too many functions do not use all threads
Functions running on a reduced number of threads (typically sequential code) cover at least 10% of application walltime (90.58%). Check both "Max Inclusive Time Over Threads" and "Nb Threads" in the Functions or Loops tabs, and consider parallelizing sequential regions or improving the parallelization of regions running on a reduced number of threads.
[ 3 / 3 ] Cumulative Outermost/In between loops coverage (6.01%) lower than cumulative innermost loop coverage (64.68%)
Having cumulative Outermost/In between loops coverage greater than cumulative innermost loop coverage will make loop optimization more complex
[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations
BLAS2 calls usually make poor use of the cache and could benefit from inlining.
[ 2 / 2 ] Less than 10% (0.00%) is spent in Libm/SVML (special functions)
Loop ID | Analysis | Penalty Score |
---|---|---|
►Loop 2 - kmeans-gcc-O3 | Execution Time: 61 % - Vectorization Ratio: 18.18 % - Vector Length Use: 26.14 % | |
►Loop Computation Issues | | 2 |
○ | [SA] Presence of a large number of scalar integer instructions - Simplify the loop structure, perform loop splitting, or perform unroll and jam. This issue costs 2 points. | 2 |
○ Control Flow Issues | | 0 |
►Vectorization Roadblocks | | 1000 |
○ | [SA] Too many paths (at least 1000 paths) - Simplify the control structure. There are at least 1000 issues ( = paths) costing 1 point each. | 1000 |
►Loop 1 - kmeans-gcc-O3 | Execution Time: 6 % - Vectorization Ratio: 0.00 % - Vector Length Use: 18.75 % | |
►Control Flow Issues | | 2 |
○ | [SA] Non-innermost loop (InBetween) - Collapse the loop with the innermost ones. This issue costs 2 points. | 2 |
►Vectorization Roadblocks | | 1002 |
○ | [SA] Too many paths (at least 1000 paths) - Simplify the control structure. There are at least 1000 issues ( = paths) costing 1 point each. | 1000 |
○ | [SA] Non-innermost loop (InBetween) - Collapse the loop with the innermost ones. This issue costs 2 points. | 2 |
►Loop 15 - kmeans-gcc-O3 | Execution Time: 3 % - Vectorization Ratio: 10.00 % - Vector Length Use: 23.75 % | |
►Loop Computation Issues | | 6 |
○ | [SA] Less than 10% of the FP ADD/SUB/MUL arithmetic operations are performed using FMA - Reorganize arithmetic expressions to exhibit potential for FMA. This issue costs 4 points. | 4 |
○ | [SA] Presence of a large number of scalar integer instructions - Simplify the loop structure, perform loop splitting, or perform unroll and jam. This issue costs 2 points. | 2 |
►Data Access Issues | | 12 |
○ | [SA] Presence of indirect accesses - Use array restructuring or gather instructions to lower the cost. There are 3 issues ( = indirect data accesses) costing 4 points each. | 12 |
►Vectorization Roadblocks | | 12 |
○ | [SA] Presence of indirect accesses - Use array restructuring or gather instructions to lower the cost. There are 3 issues ( = indirect data accesses) costing 4 points each. | 12 |
►Loop 30 - kmeans-gcc-O3 | Execution Time: 0 % - Vectorization Ratio: 40.00 % - Vector Length Use: 27.50 % | |
►Control Flow Issues | | 1 |
○ | [SA] Presence of calls - Inline either by the compiler or by hand, and use SVML for libm calls (see the inlining sketch after this table). There is 1 issue (= call) costing 1 point. | 1 |
►Data Access Issues | | 6 |
○ | [SA] Presence of constant non-unit-stride data accesses - Use array restructuring, perform loop interchange, or use gather instructions to lower the cost a bit. There are 3 issues ( = data accesses) costing 2 points each. | 6 |
►Vectorization Roadblocks | | 7 |
○ | [SA] Presence of calls - Inline either by the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |
○ | [SA] Presence of constant non-unit-stride data accesses - Use array restructuring, perform loop interchange, or use gather instructions to lower the cost a bit. There are 3 issues ( = data accesses) costing 2 points each. | 6 |
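For the 'presence of calls' penalties on Loop 30 above, a hedged sketch of inlining a small helper so that the hot loop no longer contains an opaque call (illustrative names, not the actual kmeans source):

```c
#include <float.h>
#include <stddef.h>

/* A static inline definition visible in the same translation unit lets the
   compiler remove the call and optimize across the whole loop body. */
static inline double sq_dist(const double *a, const double *b, size_t n)
{
    double d = 0.0;
    for (size_t f = 0; f < n; f++) {
        double t = a[f] - b[f];
        d += t * t;
    }
    return d;
}

void nearest(const double *points, const double *centroids, int *membership,
             size_t npoints, size_t nclusters, size_t nfeatures)
{
    for (size_t i = 0; i < npoints; i++) {
        double best = DBL_MAX;
        int best_k = 0;
        for (size_t k = 0; k < nclusters; k++) {
            double d = sq_dist(points + i * nfeatures,
                               centroids + k * nfeatures, nfeatures);
            if (d < best) { best = d; best_k = (int)k; }
        }
        membership[i] = best_k;
    }
}
```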
[ 4 / 4 ] Application profile is long enough (57.74 s)
To have good quality measurements, it is advised that the application profiling time is greater than 10 seconds.
[ 3 / 3 ] Optimization level option is correctly used
[ 3 / 3 ] Most of the time spent in analyzed modules comes from functions compiled with -g and -fno-omit-frame-pointer
The -g option gives access to debugging information, such as source locations. -fno-omit-frame-pointer improves the accuracy of the callchains found during application profiling.
[ 3 / 3 ] Host configuration allows retrieval of all necessary metrics.
[ 3 / 3 ] Architecture specific option -mcpu is used
[ 2 / 2 ] Application is correctly profiled ("Others" category represents 12.02 % of the execution time)
To have a representative profile, it is advised that the "Others" category represents less than 20% of the execution time, so that as much of the user code as possible is analyzed.
[ 1 / 1 ] Lstopo present. The topology (lstopo) report will be generated.
[ 0 / 0 ] Fastmath not used
Consider adding -ffast-math to the compilation flags (or replacing -O3 with -Ofast) to unlock potential extra speedup by relaxing floating-point computation consistency. Warning: floating-point accuracy may be reduced and compliance with IEEE/ISO rules/specifications for math functions will be relaxed; typically, 'errno' will no longer be set after calling some math functions.
[ 4 / 4 ] Enough of the experiment time is spent in analyzed loops (71.18%)
If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.
[ 4 / 4 ] Thread activity is good
On average, more than 321.05% of observed threads are actually active
[ 0 / 4 ] CPU activity is below 90% (6.69%)
CPU cores are idle more than 10% of the time. Threads that are supposed to run on these cores are probably waiting on IO or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.
[ 4 / 4 ] Loop profile is not flat
At least one loop has a coverage greater than 4% (60.48%), representing a hotspot for the application
[ 4 / 4 ] Enough of the experiment time is spent in analyzed innermost loops (65.22%)
If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.
[ 0 / 4 ] Affinity stability is lower than 90% (7.50%)
Threads are often migrating to other CPU cores/threads. For OpenMP, typically set (OMP_PLACES=cores OMP_PROC_BIND=close) or (OMP_PLACES=threads OMP_PROC_BIND=spread). With OpenMPI + OpenMP, use --bind-to core --map-by node:PE=$OMP_NUM_THREADS --report-bindings. With IntelMPI + OpenMP, set I_MPI_PIN_DOMAIN=omp:compact or I_MPI_PIN_DOMAIN=omp:scatter and use -print-rank-map.
[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations
It could be more efficient to inline BLAS1 operations by hand
[ 0 / 3 ] Too many functions do not use all threads
Functions running on a reduced number of threads (typically sequential code) cover at least 10% of application walltime (92.68%). Check both "Max Inclusive Time Over Threads" and "Nb Threads" in the Functions or Loops tabs, and consider parallelizing sequential regions or improving the parallelization of regions running on a reduced number of threads.
[ 3 / 3 ] Cumulative Outermost/In between loops coverage (5.95%) lower than cumulative innermost loop coverage (65.22%)
Having cumulative Outermost/In between loops coverage greater than cumulative innermost loop coverage will make loop optimization more complex
[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations
BLAS2 calls usually make poor use of the cache and could benefit from inlining.
[ 2 / 2 ] Less than 10% (0.00%) is spent in Libm/SVML (special functions)
Loop ID | Analysis | Penalty Score |
---|---|---|
►Loop 2 - kmeans-gcc-O3 | Execution Time: 60 % - Vectorization Ratio: 18.18 % - Vector Length Use: 26.14 % | |
►Loop Computation Issues | | 2 |
○ | [SA] Presence of a large number of scalar integer instructions - Simplify the loop structure, perform loop splitting, or perform unroll and jam (see the unroll-and-jam sketch after this table). This issue costs 2 points. | 2 |
○ Control Flow Issues | | 0 |
►Vectorization Roadblocks | | 1000 |
○ | [SA] Too many paths (at least 1000 paths) - Simplify the control structure. There are at least 1000 issues ( = paths) costing 1 point each. | 1000 |
►Loop 1 - kmeans-gcc-O3 | Execution Time: 5 % - Vectorization Ratio: 0.00 % - Vector Length Use: 18.75 % | |
►Control Flow Issues | | 2 |
○ | [SA] Non-innermost loop (InBetween) - Collapse the loop with the innermost ones. This issue costs 2 points. | 2 |
►Vectorization Roadblocks | | 1002 |
○ | [SA] Too many paths (at least 1000 paths) - Simplify the control structure. There are at least 1000 issues ( = paths) costing 1 point each. | 1000 |
○ | [SA] Non-innermost loop (InBetween) - Collapse the loop with the innermost ones. This issue costs 2 points. | 2 |
►Loop 15 - kmeans-gcc-O3 | Execution Time: 4 % - Vectorization Ratio: 10.00 % - Vector Length Use: 23.75 % | |
►Loop Computation Issues | | 6 |
○ | [SA] Less than 10% of the FP ADD/SUB/MUL arithmetic operations are performed using FMA - Reorganize arithmetic expressions to exhibit potential for FMA. This issue costs 4 points. | 4 |
○ | [SA] Presence of a large number of scalar integer instructions - Simplify the loop structure, perform loop splitting, or perform unroll and jam. This issue costs 2 points. | 2 |
►Data Access Issues | | 12 |
○ | [SA] Presence of indirect accesses - Use array restructuring or gather instructions to lower the cost. There are 3 issues ( = indirect data accesses) costing 4 points each. | 12 |
►Vectorization Roadblocks | | 12 |
○ | [SA] Presence of indirect accesses - Use array restructuring or gather instructions to lower the cost. There are 3 issues ( = indirect data accesses) costing 4 points each. | 12 |
►Loop 30 - kmeans-gcc-O3 | Execution Time: 0 % - Vectorization Ratio: 40.00 % - Vector Length Use: 27.50 % | |
►Control Flow Issues | | 1 |
○ | [SA] Presence of calls - Inline either by the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |
►Data Access Issues | | 6 |
○ | [SA] Presence of constant non-unit-stride data accesses - Use array restructuring, perform loop interchange, or use gather instructions to lower the cost a bit. There are 3 issues ( = data accesses) costing 2 points each. | 6 |
►Vectorization Roadblocks | | 7 |
○ | [SA] Presence of calls - Inline either by the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |
○ | [SA] Presence of constant non-unit-stride data accesses - Use array restructuring, perform loop interchange, or use gather instructions to lower the cost a bit. There are 3 issues ( = data accesses) costing 2 points each. | 6 |
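For the 'large number of scalar integer instructions' penalties above, a hedged sketch of unroll and jam (illustrative names, not the actual kmeans source): the outer loop over centroids is unrolled by two and the resulting inner loops are fused, so loop overhead and address arithmetic are shared between two accumulations.

```c
#include <stddef.h>

/* Before: one distance accumulation per pass over the features. */
void dists_simple(double *d, const double *x, const double *c,
                  size_t nclusters, size_t nfeatures)
{
    for (size_t k = 0; k < nclusters; k++) {
        double acc = 0.0;
        for (size_t f = 0; f < nfeatures; f++) {
            double t = x[f] - c[k * nfeatures + f];
            acc += t * t;
        }
        d[k] = acc;
    }
}

/* After: the outer loop is unrolled by 2 and the inner loops are jammed. */
void dists_unroll_jam(double *d, const double *x, const double *c,
                      size_t nclusters, size_t nfeatures)
{
    size_t k = 0;
    for (; k + 1 < nclusters; k += 2) {
        const double *c0 = c + k * nfeatures;
        const double *c1 = c0 + nfeatures;
        double acc0 = 0.0, acc1 = 0.0;
        for (size_t f = 0; f < nfeatures; f++) {
            double t0 = x[f] - c0[f];
            double t1 = x[f] - c1[f];
            acc0 += t0 * t0;
            acc1 += t1 * t1;
        }
        d[k] = acc0;
        d[k + 1] = acc1;
    }
    for (; k < nclusters; k++) {          /* remainder iteration */
        const double *ck = c + k * nfeatures;
        double acc = 0.0;
        for (size_t f = 0; f < nfeatures; f++) {
            double t = x[f] - ck[f];
            acc += t * t;
        }
        d[k] = acc;
    }
}
```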
[ 4 / 4 ] Application profile is long enough (56.97 s)
To have good quality measurements, it is advised that the application profiling time is greater than 10 seconds.
[ 3 / 3 ] Optimization level option is correctly used
[ 3 / 3 ] Most of the time spent in analyzed modules comes from functions compiled with -g and -fno-omit-frame-pointer
The -g option gives access to debugging information, such as source locations. -fno-omit-frame-pointer improves the accuracy of the callchains found during application profiling.
[ 3 / 3 ] Host configuration allows retrieval of all necessary metrics.
[ 3 / 3 ] Architecture specific option -mcpu is used
[ 2 / 2 ] Application is correctly profiled ("Others" category represents 12.17 % of the execution time)
To have a representative profile, it is advised that the "Others" category represents less than 20% of the execution time, so that as much of the user code as possible is analyzed.
[ 1 / 1 ] Lstopo present. The topology (lstopo) report will be generated.
[ 0 / 0 ] Fastmath not used
Consider adding -ffast-math to the compilation flags (or replacing -O3 with -Ofast) to unlock potential extra speedup by relaxing floating-point computation consistency. Warning: floating-point accuracy may be reduced and compliance with IEEE/ISO rules/specifications for math functions will be relaxed; typically, 'errno' will no longer be set after calling some math functions.
[ 4 / 4 ] Enough of the experiment time is spent in analyzed loops (71.49%)
If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.
[ 4 / 4 ] Thread activity is good
On average, more than 329.00% of observed threads are actually active
[ 0 / 4 ] CPU activity is below 90% (5.14%)
CPU cores are idle more than 10% of the time. Threads that are supposed to run on these cores are probably waiting on IO or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.
[ 4 / 4 ] Loop profile is not flat
At least one loop has a coverage greater than 4% (59.71%), representing a hotspot for the application
[ 4 / 4 ] Enough of the experiment time is spent in analyzed innermost loops (65.70%)
If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.
[ 0 / 4 ] Affinity stability is lower than 90% (5.84%)
Threads are often migrating to other CPU cores/threads. For OpenMP, typically set (OMP_PLACES=cores OMP_PROC_BIND=close) or (OMP_PLACES=threads OMP_PROC_BIND=spread). With OpenMPI + OpenMP, use --bind-to core --map-by node:PE=$OMP_NUM_THREADS --report-bindings. With IntelMPI + OpenMP, set I_MPI_PIN_DOMAIN=omp:compact or I_MPI_PIN_DOMAIN=omp:scatter and use -print-rank-map.
[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations
It could be more efficient to inline BLAS1 operations by hand
[ 0 / 3 ] Too many functions do not use all threads
Functions running on a reduced number of threads (typically sequential code) cover at least 10% of application walltime (93.53%). Check both "Max Inclusive Time Over Threads" and "Nb Threads" in the Functions or Loops tabs, and consider parallelizing sequential regions or improving the parallelization of regions running on a reduced number of threads.
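At this thread count most of the walltime is covered by code running on a reduced number of threads. A hedged sketch of parallelizing such a region (illustrative names, not the actual kmeans source): a centroid update that was left sequential can be distributed over clusters with a parallel loop.

```c
#include <stddef.h>

/* Hypothetical centroid update parallelized over clusters; counts[k] holds the
   number of points assigned to cluster k and sums[] their per-feature totals. */
void update_centroids(double *centroids, const double *sums, const int *counts,
                      size_t nclusters, size_t nfeatures)
{
    #pragma omp parallel for
    for (size_t k = 0; k < nclusters; k++) {
        double inv = counts[k] ? 1.0 / (double)counts[k] : 0.0;
        for (size_t f = 0; f < nfeatures; f++)
            centroids[k * nfeatures + f] = sums[k * nfeatures + f] * inv;
    }
}
```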
[ 3 / 3 ] Cumulative Outermost/In between loops coverage (5.80%) lower than cumulative innermost loop coverage (65.70%)
Having cumulative Outermost/In between loops coverage greater than cumulative innermost loop coverage will make loop optimization more complex
[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations
BLAS2 calls usually make poor use of the cache and could benefit from inlining.
[ 2 / 2 ] Less than 10% (0.00%) is spent in Libm/SVML (special functions)
Loop ID | Analysis | Penalty Score |
---|---|---|
►Loop 2 - kmeans-gcc-O3 | Execution Time: 59 % - Vectorization Ratio: 18.18 % - Vector Length Use: 26.14 % | |
►Loop Computation Issues | | 2 |
○ | [SA] Presence of a large number of scalar integer instructions - Simplify the loop structure, perform loop splitting, or perform unroll and jam. This issue costs 2 points. | 2 |
○ Control Flow Issues | | 0 |
►Vectorization Roadblocks | | 1000 |
○ | [SA] Too many paths (at least 1000 paths) - Simplify the control structure. There are at least 1000 issues ( = paths) costing 1 point each. | 1000 |
►Loop 1 - kmeans-gcc-O3 | Execution Time: 5 % - Vectorization Ratio: 0.00 % - Vector Length Use: 18.75 % | |
►Control Flow Issues | | 2 |
○ | [SA] Non-innermost loop (InBetween) - Collapse the loop with the innermost ones. This issue costs 2 points. | 2 |
►Vectorization Roadblocks | | 1002 |
○ | [SA] Too many paths (at least 1000 paths) - Simplify the control structure. There are at least 1000 issues ( = paths) costing 1 point each. | 1000 |
○ | [SA] Non-innermost loop (InBetween) - Collapse the loop with the innermost ones. This issue costs 2 points. | 2 |
►Loop 15 - kmeans-gcc-O3 | Execution Time: 5 % - Vectorization Ratio: 10.00 % - Vector Length Use: 23.75 % | |
►Loop Computation Issues | | 6 |
○ | [SA] Less than 10% of the FP ADD/SUB/MUL arithmetic operations are performed using FMA - Reorganize arithmetic expressions to exhibit potential for FMA. This issue costs 4 points. | 4 |
○ | [SA] Presence of a large number of scalar integer instructions - Simplify the loop structure, perform loop splitting, or perform unroll and jam. This issue costs 2 points. | 2 |
►Data Access Issues | | 12 |
○ | [SA] Presence of indirect accesses - Use array restructuring or gather instructions to lower the cost. There are 3 issues ( = indirect data accesses) costing 4 points each. | 12 |
►Vectorization Roadblocks | | 12 |
○ | [SA] Presence of indirect accesses - Use array restructuring or gather instructions to lower the cost. There are 3 issues ( = indirect data accesses) costing 4 points each. | 12 |
►Loop 30 - kmeans-gcc-O3 | Execution Time: 0 % - Vectorization Ratio: 40.00 % - Vector Length Use: 27.50 % | |
►Control Flow Issues | | 1 |
○ | [SA] Presence of calls - Inline either by the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |
►Data Access Issues | | 6 |
○ | [SA] Presence of constant non-unit-stride data accesses - Use array restructuring, perform loop interchange, or use gather instructions to lower the cost a bit. There are 3 issues ( = data accesses) costing 2 points each. | 6 |
►Vectorization Roadblocks | | 7 |
○ | [SA] Presence of calls - Inline either by the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |
○ | [SA] Presence of constant non-unit-stride data accesses - Use array restructuring, perform loop interchange, or use gather instructions to lower the cost a bit. There are 3 issues ( = data accesses) costing 2 points each. | 6 |