
Stylizer

Runs compared: orig_default, icx_default, gcc_default, aocc_2, icx_1, gcc_1

[ 3 / 3 ] Host configuration allows retrieval of all necessary metrics.

[ 3 / 3 ] Host configuration allows retrieval of all necessary metrics.

[ 3 / 3 ] Host configuration allows retrieval of all necessary metrics.

[ 3 / 3 ] Host configuration allows retrieval of all necessary metrics.

[ 3 / 3 ] Host configuration allows retrieval of all necessary metrics.

[ 3 / 3 ] Host configuration allows retrieval of all necessary metrics.

[ 3 / 3 ] Architecture-specific option -march=native is used

Not available for this run

Not available for this run

[ 3 / 3 ] Architecture-specific option -march=znver5 is used

[ 3.00 / 3 ] Architecture-specific option -axCORE is used

[ 3 / 3 ] Architecture-specific option -march=znver5 is used

[ 3 / 3 ] Most of the time spent in analyzed modules comes from functions compiled with -g and -fno-omit-frame-pointer

The -g option gives access to debugging information, such as source locations. -fno-omit-frame-pointer improves the accuracy of call chains found while profiling the application.

Not available for this run

Not available for this run

[ 3 / 3 ] Most of the time spent in analyzed modules comes from functions compiled with -g and -fno-omit-frame-pointer

The -g option gives access to debugging information, such as source locations. -fno-omit-frame-pointer improves the accuracy of call chains found while profiling the application.

[ 3.00 / 3 ] Most of the time spent in analyzed modules comes from functions compiled with -g and -fno-omit-frame-pointer

The -g option gives access to debugging information, such as source locations. -fno-omit-frame-pointer improves the accuracy of call chains found while profiling the application.

[ 3 / 3 ] Most of the time spent in analyzed modules comes from functions compiled with -g and -fno-omit-frame-pointer

The -g option gives access to debugging information, such as source locations. -fno-omit-frame-pointer improves the accuracy of call chains found while profiling the application.
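As a hedged illustration of the two checks above (architecture-specific code generation plus profiling-friendly debug options), the sketch below shows a hypothetical translation unit and compile line; the file name, the function scale() and the exact flag spelling per compiler are assumptions, not taken from the analyzed build.

    /* Hypothetical compile line, e.g. with GCC on a znver5 machine:
     *   gcc -O3 -march=znver5 -g -fno-omit-frame-pointer kernel.c -o kernel
     * -march=native is an alternative when compiling on the target host.
     * -g and -fno-omit-frame-pointer only preserve source locations and frame
     * pointers for the profiler; they do not change the optimization level. */
    #include <stdio.h>
    #include <stddef.h>

    /* scale() is a stand-in for an application hot loop. */
    static void scale(double *restrict x, double a, size_t n) {
        for (size_t i = 0; i < n; i++)
            x[i] *= a;
    }

    int main(void) {
        enum { N = 1000 };
        static double x[N];
        for (size_t i = 0; i < N; i++)
            x[i] = (double)i;
        scale(x, 0.5, N);
        printf("%f\n", x[N - 1]);   /* keep the result live */
        return 0;
    }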

[ 0 / 4 ] Application profile is too short (3.06 s)

If the overall application profiling time is less than 10 seconds, many of the function- or loop-level measurements will very likely fall below the measurement quality threshold (0.1 seconds). Rerun with a longer runtime: for example, use a larger dataset or add a repetition loop.

[ 0 / 4 ] Application profile is too short (7.43 s)

If the overall application profiling time is less than 10 seconds, many of the function- or loop-level measurements will very likely fall below the measurement quality threshold (0.1 seconds). Rerun with a longer runtime: for example, use a larger dataset or add a repetition loop.

[ 4 / 4 ] Application profile is long enough (159.97 s)

For good-quality measurements, the application profiling time should be greater than 10 seconds.

[ 0 / 4 ] Application profile is too short (3.00 s)

If the overall application profiling time is less than 10 seconds, many of the function- or loop-level measurements will very likely fall below the measurement quality threshold (0.1 seconds). Rerun with a longer runtime: for example, use a larger dataset or add a repetition loop.

[ 0 / 4 ] Application profile is too short (7.45 s)

If the overall application profiling time is less than 10 seconds, many of the function- or loop-level measurements will very likely fall below the measurement quality threshold (0.1 seconds). Rerun with a longer runtime: for example, use a larger dataset or add a repetition loop.

[ 0 / 4 ] Application profile is too short (5.34 s)

If the overall application profiling time is less than 10 seconds, many of the function- or loop-level measurements will very likely fall below the measurement quality threshold (0.1 seconds). Rerun with a longer runtime: for example, use a larger dataset or add a repetition loop.
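One way to follow the repetition-loop advice above is sketched below; run_kernel() is a hypothetical stand-in for the application's real computation, and the repetition count is an assumption to be tuned so that total runtime exceeds the 10-second threshold.

    #include <stdio.h>

    /* Hypothetical stand-in for the application's real computation. */
    static double run_kernel(int n) {
        double s = 0.0;
        for (int i = 0; i < n; i++)
            s += (double)i * 0.5;
        return s;
    }

    int main(void) {
        const int repetitions = 50;   /* tune so that total runtime exceeds ~10 s */
        double acc = 0.0;
        for (int r = 0; r < repetitions; r++)
            acc += run_kernel(1000000);
        printf("%f\n", acc);          /* keep the result live so the loop is not optimized away */
        return 0;
    }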

[ 2 / 2 ] Application is correctly profiled ("Others" category represents 0.00% of the execution time)

For representative profiling, the "Others" category should represent less than 20% of the execution time, so that as much of the user code as possible is analyzed.

[ 2 / 2 ] Application is correctly profiled ("Others" category represents 0.00% of the execution time)

For representative profiling, the "Others" category should represent less than 20% of the execution time, so that as much of the user code as possible is analyzed.

[ 2 / 2 ] Application is correctly profiled ("Others" category represents 0.00% of the execution time)

For representative profiling, the "Others" category should represent less than 20% of the execution time, so that as much of the user code as possible is analyzed.

[ 2 / 2 ] Application is correctly profiled ("Others" category represents 0.00% of the execution time)

For representative profiling, the "Others" category should represent less than 20% of the execution time, so that as much of the user code as possible is analyzed.

[ 2 / 2 ] Application is correctly profiled ("Others" category represents 0.00% of the execution time)

For representative profiling, the "Others" category should represent less than 20% of the execution time, so that as much of the user code as possible is analyzed.

[ 2 / 2 ] Application is correctly profiled ("Others" category represents 0.00% of the execution time)

For representative profiling, the "Others" category should represent less than 20% of the execution time, so that as much of the user code as possible is analyzed.

[ 3 / 3 ] Optimization level option is correctly used

[ 0 / 9 ] Compilation options are not available

Compilation options are an important optimization lever, but ONE-View is not able to analyze them.

[ 0 / 9 ] Compilation options are not available

Compilation options are an important optimization lever, but ONE-View is not able to analyze them.

[ 3 / 3 ] Optimization level option is correctly used

[ 3 / 3 ] Optimization level option is correctly used

[ 3 / 3 ] Optimization level option is correctly used

[ 1 / 1 ] Lstopo is present. The lstopo topology report will be generated.

[ 1 / 1 ] Lstopo is present. The lstopo topology report will be generated.

[ 1 / 1 ] Lstopo is present. The lstopo topology report will be generated.

[ 1 / 1 ] Lstopo is present. The lstopo topology report will be generated.

[ 1 / 1 ] Lstopo is present. The lstopo topology report will be generated.

[ 1 / 1 ] Lstopo is present. The lstopo topology report will be generated.

Strategizer

Runs compared: orig_default, icx_default, gcc_default, aocc_2, icx_1, gcc_1

[ 4 / 4 ] CPU activity is good

CPU cores are active 93.97% of the time

[ 4 / 4 ] CPU activity is good

CPU cores are active 96.78% of the time

[ 2 / 4 ] CPU activity is below 90% (62.43%)

CPU cores are idle more than 10% of the time. Threads meant to run on these cores are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.
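Regarding the load-balancing hint above, the OpenMP sketch below (compiled with OpenMP support, e.g. -fopenmp) shows one generic option, dynamic scheduling; it assumes the idle time comes from uneven iteration costs, which this report does not establish.

    #include <stdio.h>

    int main(void) {
        enum { N = 100000 };
        static double work[N];
        /* schedule(dynamic, 64): idle threads pick up remaining 64-iteration
           chunks instead of waiting for the most loaded thread to finish */
        #pragma omp parallel for schedule(dynamic, 64)
        for (int i = 0; i < N; i++) {
            double s = 0.0;
            for (int j = 0; j < i % 1000; j++)   /* deliberately uneven cost */
                s += (double)j * 0.5;
            work[i] = s;
        }
        printf("%f\n", work[N - 1]);
        return 0;
    }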

[ 4 / 4 ] CPU activity is good

CPU cores are active 94.22% of the time

[ 4 / 4 ] CPU activity is good

CPU cores are active 97.26% of the time

[ 4 / 4 ] CPU activity is good

CPU cores are active 97.81% of the time

[ 4 / 4 ] Affinity is good (96.55%)

Threads are not migrating between CPU cores: they are probably successfully pinned

[ 4 / 4 ] Affinity is good (98.79%)

Threads are not migrating between CPU cores: they are probably successfully pinned

[ 4 / 4 ] Affinity is good (99.91%)

Threads are not migrating between CPU cores: they are probably successfully pinned

[ 4 / 4 ] Affinity is good (96.79%)

Threads are not migrating between CPU cores: they are probably successfully pinned

[ 4 / 4 ] Affinity is good (98.84%)

Threads are not migrating between CPU cores: they are probably successfully pinned

[ 4 / 4 ] Affinity is good (98.65%)

Threads are not migrating between CPU cores: they are probably successfully pinned
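For reference, pinning is usually requested from the runtime (for OpenMP, the OMP_PROC_BIND and OMP_PLACES environment variables) rather than coded by hand; the Linux-specific sketch below shows the explicit alternative with sched_setaffinity, pinning the calling thread to core 0 as an arbitrary example.

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);   /* allow the calling thread to run on core 0 only */
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        /* ... the computation placed here can no longer migrate between cores ... */
        return 0;
    }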

[ 4 / 4 ] Enough of the experiment time is spent in analyzed loops (68.56%)

If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed loops (76.21%)

If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.

[ 0 / 4 ] Too little of the experiment time is spent in analyzed loops (12.15%)

If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed loops (71.25%)

If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed loops (75.34%)

If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed loops (40.89%)

If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.

[ 3 / 3 ] Cumulative outermost/in-between loop coverage (0.00%) is lower than cumulative innermost loop coverage (68.56%)

Having cumulative outermost/in-between loop coverage greater than cumulative innermost loop coverage would make loop optimization more complex

[ 3 / 3 ] Cumulative outermost/in-between loop coverage (0.00%) is lower than cumulative innermost loop coverage (76.21%)

Having cumulative outermost/in-between loop coverage greater than cumulative innermost loop coverage would make loop optimization more complex

[ 3 / 3 ] Cumulative outermost/in-between loop coverage (0.00%) is lower than cumulative innermost loop coverage (12.15%)

Having cumulative outermost/in-between loop coverage greater than cumulative innermost loop coverage would make loop optimization more complex

[ 3 / 3 ] Cumulative outermost/in-between loop coverage (0.00%) is lower than cumulative innermost loop coverage (71.25%)

Having cumulative outermost/in-between loop coverage greater than cumulative innermost loop coverage would make loop optimization more complex

[ 3 / 3 ] Cumulative outermost/in-between loop coverage (0.00%) is lower than cumulative innermost loop coverage (75.34%)

Having cumulative outermost/in-between loop coverage greater than cumulative innermost loop coverage would make loop optimization more complex

[ 3 / 3 ] Cumulative outermost/in-between loop coverage (0.00%) is lower than cumulative innermost loop coverage (40.89%)

Having cumulative outermost/in-between loop coverage greater than cumulative innermost loop coverage would make loop optimization more complex

[ 3 / 4 ] A significant number of threads are idle (10.15%)

On average, more than 10% of observed threads are idle. Such threads are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.

[ 4 / 4 ] Thread activity is good

On average, more than 95.12% of observed threads are actually active

[ 2 / 4 ] A significant number of threads are idle (37.61%)

On average, more than 10% of observed threads are idle. Such threads are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.

[ 4 / 4 ] Thread activity is good

On average, more than 90.18% of observed threads are actually active

[ 4 / 4 ] Thread activity is good

On average, more than 95.59% of observed threads are actually active

[ 4 / 4 ] Thread activity is good

On average, more than 96.28% of observed threads are actually active

[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations

BLAS2 calls usually make poor use of the cache and could benefit from inlining.

[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations

BLAS2 calls usually make poor use of the cache and could benefit from inlining.

[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations

BLAS2 calls usually make poor use of the cache and could benefit from inlining.

[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations

BLAS2 calls usually make poor use of the cache and could benefit from inlining.

[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations

BLAS2 calls usually make poor use of the cache and could benefit from inlining.

[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations

BLAS2 calls usually make poor use of the cache and could benefit from inlining.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed innermost loops (68.56%)

If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed innermost loops (76.21%)

If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.

[ 0 / 4 ] Too little of the experiment time is spent in analyzed innermost loops (12.15%)

If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed innermost loops (71.25%)

If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed innermost loops (75.34%)

If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed innermost loops (40.89%)

If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.

[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations

It could be more efficient to inline BLAS1 operations by hand

[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations

It could be more efficient to inline BLAS1 operations by hand

[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations

It could be more efficient to inline BLAS1 operations by hand

[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations

It could be more efficient to inline BLAS1 operations by hand

[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations

It could be more efficient to inline BLAS1 operations by hand

[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations

It could be more efficient to inline BLAS1 operations by hand
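As a hedged illustration of the hand-inlining suggestion above (not needed for these runs, since BLAS1 coverage is 0.00%), a unit-stride BLAS1 DAXPY-style update y = a*x + y can be written as a plain loop so the compiler can inline, fuse and vectorize it in context; daxpy_inlined() is a hypothetical name.

    #include <stdio.h>
    #include <stddef.h>

    /* Plain-loop equivalent of a unit-stride BLAS1 DAXPY call: y = a*x + y. */
    static void daxpy_inlined(size_t n, double a,
                              const double *restrict x, double *restrict y) {
        for (size_t i = 0; i < n; i++)
            y[i] += a * x[i];
    }

    int main(void) {
        enum { N = 1000 };
        static double x[N], y[N];
        for (size_t i = 0; i < N; i++) { x[i] = 1.0; y[i] = 2.0; }
        daxpy_inlined(N, 0.5, x, y);
        printf("%f\n", y[N - 1]);
        return 0;
    }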

[ 2 / 2 ] Less than 10% (0.00%) is spent in Libm/SVML (special functions)

[ 2 / 2 ] Less than 10% (0.00%) is spent in Libm/SVML (special functions)

[ 0 / 2 ] More than 10% (58.24%) is spent in Libm/SVML (special functions)

The application makes heavy use of special math functions (powers, exp, sin, etc.), so a proper library version has to be used and the exact accuracy requirements have to be evaluated. Perform input-value profiling: first count how many distinct input values occur. Recompile with -ffast-math or -Ofast to help/enable vectorization of loops calling math functions.
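Regarding the -ffast-math / -Ofast advice above, the sketch below shows the kind of libm-bound loop it targets; the file name, compile line and the assumption that exp() dominates are illustrative only, and the accuracy impact of relaxed floating-point semantics has to be validated against the application's requirements.

    /* Hypothetically compiled with, e.g.:  gcc -Ofast -march=znver5 hot.c -lm
     * -Ofast implies -ffast-math, which relaxes IEEE semantics and lets the
     * compiler replace scalar exp() calls in the loop with a vectorized
     * math-library version when one is available. */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        enum { N = 1 << 20 };
        static double x[N], y[N];
        for (int i = 0; i < N; i++)
            x[i] = (double)i / N;
        for (int i = 0; i < N; i++)   /* candidate loop for vectorized exp() */
            y[i] = exp(x[i]);
        printf("%f\n", y[N - 1]);
        return 0;
    }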

[ 2 / 2 ] Less than 10% (0.00%) is spent in Libm/SVML (special functions)

[ 2 / 2 ] Less than 10% (0.00%) is spent in Libm/SVML (special functions)

[ 2 / 2 ] Less than 10% (0.00%) is spent in Libm/SVML (special functions)

[ 4 / 4 ] Loop profile is not flat

At least one loop's coverage is greater than 4% (68.31%), representing a hotspot for the application

[ 4 / 4 ] Loop profile is not flat

At least one loop's coverage is greater than 4% (76.13%), representing a hotspot for the application

[ 4 / 4 ] Loop profile is not flat

At least one loop's coverage is greater than 4% (11.82%), representing a hotspot for the application

[ 4 / 4 ] Loop profile is not flat

At least one loop's coverage is greater than 4% (70.95%), representing a hotspot for the application

[ 4 / 4 ] Loop profile is not flat

At least one loop's coverage is greater than 4% (75.25%), representing a hotspot for the application

[ 4 / 4 ] Loop profile is not flat

At least one loop's coverage is greater than 4% (40.67%), representing a hotspot for the application

Optimizer

Analysis (counts per run: r_1, r_2, r_3, r_4, r_5, r_6)

Loop Computation Issues
- Presence of expensive FP instructions: 2, 1, 0, 2, 1, 2
- Less than 10% of the FP ADD/SUB/MUL arithmetic operations are performed using FMA: 0, 3, 2, 0, 3, 0
- Presence of a large number of scalar integer instructions: 1, 0, 1, 1, 0, 1

Control Flow Issues
- Presence of calls: 1, 1, 1, 1, 1, 1
- Presence of 2 to 4 paths: 2, 0, 1, 2, 0, 0
- Presence of more than 4 paths: 0, 1, 0, 0, 1, 0

Data Access Issues
- Presence of constant non-unit stride data access: 1, 1, 1, 1, 0, 0
- Presence of indirect access: 0, 0, 1, 0, 0, 0
- More than 10% of the vector load instructions are unaligned: 0, 1, 0, 0, 1, 1
- Presence of special instructions executing on a single port: 0, 2, 0, 0, 2, 1
- More than 20% of the loads are accessing the stack: 2, 1, 4, 2, 1, 1

Vectorization Roadblocks
- Presence of calls: 1, 1, 1, 1, 1, 1
- Presence of 2 to 4 paths: 2, 0, 1, 2, 0, 0
- Presence of more than 4 paths: 0, 1, 0, 0, 1, 1
- Presence of constant non-unit stride data access: 1, 1, 1, 1, 0, 0
- Presence of indirect access: 0, 0, 1, 0, 0, 0

Inefficient Vectorization
- Presence of special instructions executing on a single port: 0, 2, 0, 0, 2, 1
- Use of masked instructions: 2, 0, 0, 2, 0, 1