Open|SpeedShop (O|SS)
Build and Installation Guide
Version 2.4.0
November 11, 2018
Introduction
What is CBTF and how does it relate to O|SS?
A new O|SS build method: SPACK
Some Initial Notes for the install-tool build method
Prerequisite Packages
Ubuntu/Debian:
RedHat/Fedora:
SLES/SUSE:
Building from the Release Tarballs (located on https://www.openspeedshop.org/downloads)
Installation Information
Building the new Qt4/Qt5 based O|SS GUI
Module Files, Dotkits, and softenv files
Install tool example commands from various systems
Generic Laptop or Desktop Platform Installation Examples
Build only the krell-root
Build cbtf components using the krell-root
Build only OSS using the cbtf components and the krell-root
Build everything: the krell-root, the cbtf components, and OSS using cbtf instrumentor
Cray Platform Install Examples
Instructions for building O|SS CBTF based versions on Cray platforms
Building for the compute nodes
Setup up the build environment for building for the compute nodes
Build the compute node version of krellroot
Build the compute node versions of O|SS for the CBTF version
Building for the front-end or login nodes
Setup up the build environment for building for the front-end or login nodes
Build the front-end node version of krellroot
Build the front-end node versions of cbtf, cbtf-krell, cbtf-argonavis, cbtf-lanl
Build the front-end node version of O|SS for the CBTF version
Blue Gene Systems (only offline supported) Install Examples
Compute node – O|SS collectors and runtimes
Build O|SS collectors and runtimes without using krell root components
Build krell root components for compute node
Build O|SS collectors and runtimes using krell root components
Front end node – O|SS viewer
Build O|SS Viewer not using krell root components
Build krell root components
Build O|SS viewer using krell root components
Intel MIC (KNL) Platform Install Examples
Intel MIC (KNC) Co-Processor Platform Install Examples
Instructions for building O|SS CBTF based versions on Intel MIC platforms
Building for the compute nodes
Setup up the build environment for building for the compute nodes
Build the compute node version of krellroot
Build the compute node versions of cbtf, cbtf-krell, cbtf-argonavis, cbtf-lanl
Build the compute node versions of O|SS for the CBTF version
Building for the front-end or login nodes
Build the front-end node version of krellroot
Build the front-end node versions of cbtf, cbtf-krell, cbtf-argonavis, cbtf-lanl
Build the front-end node version of O|SS for the CBTF version
ARM Platform Install Examples
Instructions for O|SS CBTF based versions on ARM platforms
Building for the ARM platform
Setup up the build environment for building for the front-end or login nodes
Build the krellroot - components needed to support building cbtf and O|SS
Build cbtf, cbtf-krell, cbtf-argonavis, cbtf-lanl
Build O|SS for the CBTF version
To Run O|SS
Environment Setup
OPENSS_PLUGIN_PATH
LD_LIBRARY_PATH
DYNINSTAPI_RT_LIB
OPENSS_MPI_IMPLEMENTATION (for offline mode of operation)
PATH
OPENSS_RAWDATA_DIR (offline mode of operation only)
OPENSS_DB_DIR
Runtime Environment Examples
Generic Cluster Module File Example
O|SS module file example
Generic Cluster Dotkit File Example
O|SS dotkit example
Blue Gene/Q Softenv File Example
Offline version (no krell root usage)
Offline version (with krell root usage)
Cray Platform Module File Example
Cray platform CBTF version module file example
Intel MIC Platform Module File Examples
Intel MIC (KNL) platform CBTF version module file example
Intel MIC (KNC co-processor) platform CBTF version module file example
ARM platform CBTF version module file example
Power 8 platform CBTF version module file example
How to use SPACK to build O|SS
Spack Introduction
Quick start guides to get started building O|SS with Spack
Building cbtf, cbtf-krell, and openspeedshop using spack
Building the cbtf-argonavis-gui (new Qt4/Qt5 GUI) using spack
Example usage following quick start guide commands
Getting the latest spack sources, i.e. the development branch and updating to the latest source.
Details about O|SS specific spack options
What components are supported
Component Variants Explained
Module file creation is automatic
Restricted network access solution
Introduction
This document supports the Open|SpeedShop (O|SS) 2.4 release and subsequent updates.
A new and preferred method of building and installing O|SS on laptops, desktops, and clusters using the spack build/package manager is now available. Please see the Spack Build and Install guide for more information. There is also a section (How to use SPACK to build O|SS) on building with Spack below. However, if that method does not work for your needs, this document describes the native install methods that have been used to build O|SS for some time now.
O|SS release 2.4 comes with an install script (install-tool) that will build all the supporting components (also referred to as the krellroot) and O|SS itself, in one step. Or the builder can invoke the script once to build the supporting component set (krellroot) and another time to build O|SS.
The install script relies on arguments to the script instead of the environment variables used by the previous install script (install.sh). With install-tool, the environment variables are set internally by the script so that the builder is not required to keep track of the numerous build environment variables that O|SS uses. In addition, more build environment variables are set by default, requiring less action from the builder.
The new install script also supports building the Component Based Tool Framework (CBTF) for the new O|SS CBTF instrumentor version that is described below.
What is CBTF and how does it relate to O|SS?
CBTF stands for Component Based Tool Framework and is a key scalability feature for the new version of O|SS that uses CBTF as its instrumentation methodology. The O|SS team has completed the base functionality for the process of changing the mechanisms that gather the performance data and transfer it from the application, where it was gathered, to the O|SS client, where it is stored into an SQLite database file. For O|SS to operate efficiently at high rank and thread counts, it could not continue to write raw data files to disk and then read them back at the end of the application execution. This is a bottleneck that is too time-consuming at high processor counts. So, the CBTF version of O|SS uses the new CBTF gathering and transfer mechanisms, which transmit raw performance data to the client tool without writing files. This is significantly faster than the offline version. CBTF can also be used independently from O|SS to rapidly prototype tools. Please see the CBTF wiki at: https://github.com/OpenSpeedShop/cbtf/wiki for more information.
A new O|SS build method: SPACK
General information about the spack build/package manager can be found at: https://github.com/spack/spack/wiki. Spack is a flexible package manager for HPC. It is intended to let you build for many combinations of compilers, architectures, dependency libraries, and build configurations, all with a friendly, intuitive user interface. Spack draws some ideas from Homebrew and Nix, with its own pure Python package format, extensive internal support for managing software dependency graphs, and powerful command line syntax.
The O|SS team has been working on creating spack package files for building O|SS under the spack build system. The O|SS build under spack is generally working for builds on clusters and pc/laptop type systems. Cray support is not ready yet.
Once spack is downloaded and set up, building O|SS can be as easy as this command:
spack install openspeedshop
See the spack section “How to use SPACK to build O|SS” below for more specific information.
Some Initial Notes for the install-tool build method
* NOTE: with version 2.3.1 (and beyond) the O|SS build will build a version of cmake (3.2.2) that works to build our software. This cmake will be used whenever the install-tool is invoked. In addition to some of the open source components using the “cmake” build tools to build their components, the O|SS team is now using cmake as the build tool for O|SS and the CBTF components that the team supplies. So, cmake is now a build prerequisite package.
* NOTE: If you have install-tool build scripts for versions 2.2 and 2.3, please replace “--build-cbtf” with “--build-cbtf-all”, as we are now using cmake as our underlying build tool instead of GNU autotools. With cmake we build cbtf, cbtf-krell, cbtf-argonavis, and cbtf-lanl separately. When the GNU autotools were used, we built all of the CBTF components within one build. Now the CBTF components are individually built (four separate builds) when “--build-cbtf-all” is used.
* Caution: boost-1.60 to 1.62 versions cause compile errors when building the cbtf components and the new cuda GUI. These are known boost problems. Please use a different version of boost (1.66 and above) when building O|SS.
* Caution: When installing a new version, please install into an empty/clean directory, as we do not clean the install directories out upon install-tool invocation. You can end up with multiple copies of the same component, which can lead to execution issues.
* We refer to root or krell-root packages or building the krell-root. What does this mean? These are the external packages that O|SS and/or CBTF uses as part of the tool. Sometimes one or several of these external packages are not available on the platform that O|SS and/or CBTF is being installed on, so we have provided the ability to build and install them. The root/external packages are provided as a convenience. You could install these packages without the use of the krell-root on the install system, if desired.
* The CBTF version of O|SS is now the default and official version of O|SS, as of the 2.3 release. The offline version is now legacy, but the offline capabilities have been built into the CBTF version and are usable on most platforms (not Blue Gene systems).
* You can install O|SS in non-standard locations (root permission is not needed)
* Versions of O|SS 2.3 and beyond can be installed using the information in this document.
* NOTE: O|SS is normally built with the GNU compilers, but now building with the Intel compiler is supported. Options to the install-tool have been created to allow building with either the gnu or intel compilers. Using the gnu compiler is the default.
o --build-compiler gnu
o --build-compiler intel
* O|SS cannot be built with PGI, Cray, or other compilers. Even though O|SS only builds with GNU or Intel compilers, O|SS supports performance analysis of applications built with a wide variety of compilers, including GNU, Intel, PGI, Pathscale, and Cray.
* There are a number of development environment packages that are required if you are starting from a clean install on your desktop or laptop. See the Prerequisite Packages section for the list of packages needed on non-development systems. Most laboratory systems normally have all the required packages installed, because most tools need these packages to build and execute successfully.
* There is now an option to build the krell root components that might be missing on the system or needed separately from O|SS. The builder can build the krell root then use that installation to build CBTF and/or O|SS. Or, you can choose to build all components into the same install directory.
* The install-tool script supports building both O|SS modes of execution:
o cbtf – Performance information is transported from the application across a TCP/IP network as it is collected and stored in the O|SS database on the fly.
o NOW LEGACY: offline – Write raw performance information to shared file system disk files while the application is executing. Then convert the raw data files to an O|SS database file when the application completes execution.
* install-tool script options for building O|SS are listed and described below:
o To build the new CBTF version use three invocations of the install-tool script individually, in the order listed.
* “--build-krell-root”
* “--build-cbtf-all”
* “--build-oss”
o NOW LEGACY: To build the offline version use two invocations of the install-tool script individually, in the order listed.
* “--build-krell-root”
* “--build-offline”
* The examples below are for illustration and will need adjustment based on the software installations on your particular system.
* On systems with heterogeneous front-end and compute node processor sets, multiple install-tool invocations are required. Examples of these types of systems are the BG/Q, some Cray systems, or on any other systems where targeted builds are needed. We suggest a separate set of builds, one set for the front-end viewer tool build and another set of builds for the compute node runtimes and performance data collectors build.
o For building the collectors and runtimes that execute on the compute node:
* “--build-krell-root” using --target-arch=<platform>
* “--build-cbtf-all” using --target-arch=<platform>
o NOW LEGACY: For building the client tool that runs on the front-end node:
* “--build-krell-root”
* “--build-offline”
o Note: It appears that some systems with heterogeneous front-end versus compute node processor types default to the compute node compilers. Most of our instructions assume the default compilers are front-end compilers. Putting “CXX=g++” and “CC=gcc” in front of the install-tool command when building the front-end version of O|SS may get around this issue on platforms where the defaults are not what we expected (defaults to compute node compiler version).
* Note: In the openspeedshop-release-2.4 subdirectory “startup_files” there are several install-tool build scripts that have been used to build O|SS on several platforms. These are available to use as a guide for installs on other new platforms.
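The CC/CXX trick from the heterogeneous-system note above relies on standard shell behavior: variable assignments placed immediately before a command are exported only into that command's environment. A minimal sketch of the mechanism, using `sh -c` as a stand-in for the actual install-tool invocation (which is not assumed to be present):

```shell
# Variable assignments before a command are exported only into that command's
# environment; here `sh -c` stands in for the install-tool invocation.
CXX=g++ CC=gcc sh -c 'echo "building with CC=$CC and CXX=$CXX"'
# prints: building with CC=gcc and CXX=g++
```

The same prefix form works in front of the real ./install-tool command line, forcing the front-end GNU compilers without changing the surrounding shell environment.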
Prerequisite Packages
There are some prerequisite packages that are necessary for building and running the O|SS performance tool. Most will be present on a system that is used for debugging and performance analysis. This is true of most national laboratory clusters, Cray, and Blue Gene platforms. However, if you are installing on machines without the necessary tool supporting packages installed then this section might be helpful.
Ubuntu/Debian:
It is necessary to apply these packages to the base system installation, if they are not already installed, in order to successfully build O|SS and the external tool specific packages required by O|SS. The example required packages below are based on an Ubuntu 16.04 install.
Packages to aid in development:
sudo apt-get install git cmake cvs
Packages to aid in downloading and building:
sudo apt-get install flex bison libxml2-dev python-dev g++ make patch environment-modules libz-dev binutils-dev libdwarf-dev libelf-dev
Packages that install-tool will build if you don’t have them installed (but the build is shorter if these are installed):
sudo apt-get install automake autoconf libtool m4 libltdl-dev (--build-autotools)
sudo apt-get install binutils binutils-dev (--build-binutils)
sudo apt-get install libelf1 elfutils libelf-dev (--build-libelf)
sudo apt-get install sqlite3 (--build-sqlite)
If the package qt3-apps-dev is available, use it; if not, you may need libx11-dev and libxext-dev instead.
RedHat/Fedora:
It is necessary to apply a list of supporting packages to the base system installation, if they are not already installed, in order to successfully build O|SS and the external tool specific packages required by O|SS. This list is somewhat variable depending on what was installed during the initial installation and prior to attempting to install O|SS. Previous experiences have resulted in this list of candidate packages:
yum install -y rpm-build \
gcc gcc-c++ \
openmpi \
patch \
autoconf automake \
elfutils-libelf elfutils-libelf-devel \
libxml2 libxml2-devel \
binutils binutils-devel \
python python-devel \
flex bison bison-devel bison-runtime \
libtool libtool-ltdl libtool-ltdl-devel cmake git
SLES/SUSE:
It is necessary to apply these packages to the base system installation, if they are not already installed, in order to successfully build O|SS and the external tool specific packages required by O|SS.
Packages to aid in development:
Modules git
Packages to aid in downloading and building:
flex bison rpm libxml2 libxml2-dev python-dev g++ make patch cmake
Packages that install-tool will build if you don’t have them installed: (but makes the build shorter if these are installed)
qt3 qt3-devel
If your system was installed with any development package group, then the need for extra package installs may be reduced significantly.
Using the install-tool that comes with the O|SS release will install many of the key open source tool components, but the above-mentioned system components must be installed by the system administrator.
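Before invoking install-tool, it can save time to probe the PATH for the prerequisites listed in this section. The sketch below checks a representative subset of the tools named above; the exact list your build needs depends on your distribution:

```shell
# Probe PATH for a representative subset of the build prerequisites listed
# above. Any tool reported MISSING should be installed with the distribution's
# package manager before running install-tool.
for tool in gcc g++ make cmake flex bison patch git; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: found"
    else
        echo "$tool: MISSING"
    fi
done
```

This only detects the command-line tools; missing development headers (the *-dev/*-devel packages) are not visible this way and will instead surface as configure or compile errors during the build.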
Building from the Release Tarballs (located on https://www.openspeedshop.org/downloads)
The release top-level directory contains the install script (install-tool), which is intended to ease the build and installation of the external tool support components (known as the krell root) that are needed to support O|SS, as well as O|SS itself.
Because O|SS depends on components that are highly dependent on operating system interfaces and libraries, it is difficult to produce executable rpms for every installation platform. This is the reason why we offer the source build installation instead of rpms.
Installation Information
Please gunzip and untar the openspeedshop-release-2.4.0 tar.gz file and change directory into the openspeedshop-release-2.4 directory.
For example:
tar -xzvf openspeedshop-release-2.4.0.tar.gz
cd openspeedshop-release-2.4
Inside the top-level release directory is the key script, install-tool, which can be used to build the O|SS performance tool.
The install-tool script has a help option:
“install-tool --help”
that provides information about the possible options a builder of O|SS can use.
The typical build for O|SS is done with an install line something like this:
./install-tool --build-krell-root
--krell-root-prefix /opt/krellroot_v2.4.0
--with-openmpi /opt/openmpi-1.8.2
./install-tool --build-cbtf-all
--cbtf-prefix /opt/cbtf_v2.4.0
--krell-root-prefix /opt/krellroot_v2.4.0
--with-openmpi /opt/openmpi-1.8.2
./install-tool --build-oss
--openss-prefix /opt/osscbtf_v2.4.0
--cbtf-prefix /opt/cbtf_v2.4.0
--krell-root-prefix /opt/krellroot_v2.4.0
--with-openmpi /opt/openmpi-1.8.2
The first install line above builds all the external (krell root) packages that O|SS needs and installs them in /opt/krellroot_v2.4.0. The second install line above uses those external packages to build the Component Based Tool Framework (CBTF). It also uses the openmpi MPI implementation to build the O|SS MPI experiment collectors. The third install line uses both the external packages built by the krell root install line and the cbtf package built by the cbtf-all install line to build the O|SS client tools.
Hint:
To get the path for the “--with-mpt” clause (or other MPI-related “--with” option clauses):
1) module load mpi-sgi/mpt.2.12r26
2) echo $LD_LIBRARY_PATH
3) Look for the MPT path (up to the "/lib") and use that in the "--with-mpt" clause of the install-tool command.
Example: echo $LD_LIBRARY_PATH
/nasa/sgi/mpt/2.12r26/lib:/home4/jgalarow/python_root/lib
Use this --with-mpt clause: “--with-mpt /nasa/sgi/mpt/2.12r26”
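The lookup in the hint above can also be scripted: split LD_LIBRARY_PATH on colons and strip the trailing /lib from each entry, which leaves the install prefixes that the “--with” clauses expect. A sketch using the example path from the hint:

```shell
# Split LD_LIBRARY_PATH on ':' and strip a trailing /lib from each entry,
# leaving install prefixes suitable for "--with-mpt" style clauses.
LD_LIBRARY_PATH="/nasa/sgi/mpt/2.12r26/lib:/home4/jgalarow/python_root/lib"
echo "$LD_LIBRARY_PATH" | tr ':' '\n' | sed 's|/lib$||'
# prints:
# /nasa/sgi/mpt/2.12r26
# /home4/jgalarow/python_root
```

You still need to pick out the entry belonging to the MPI implementation (here the mpt one) by eye; the script only does the path trimming.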
However, on some platforms the typical build install line is not adequate. Platforms like the Cray and Blue Gene require more options to the install-tool script. If you are familiar with the previous install.sh script method of building O|SS, in that build scenario extra environment variables needed to be set. With install-tool, we need additional arguments to the script to specify things like the target platform, so there is a --target-arch <target> option available in the install-tool script. For example, to build on a Cray system, this is an example build line:
./install-tool --build-krell-root --target-arch cray
--krell-root-prefix /tmp/work/<userid>/krellroot_v2.4.0
--with-boost <boost install directory>
--with-mpich2 /opt/cray/mpt/default/gni/mpich2-gnu/47
./install-tool --build-cbtf-all --target-arch cray
--cbtf-prefix /tmp/work/<userid>/cbtf_v2.4.0
--krell-root-prefix /tmp/work/<userid>/krellroot_v2.4.0
--with-mpich2 /opt/cray/mpt/default/gni/mpich2-gnu/47
./install-tool --build-oss --target-arch cray
--openss-prefix /tmp/work/<userid>/osscbtf_v2.4.0
--krell-root-prefix /tmp/work/<userid>/krellroot_v2.4.0
--with-boost <boost install directory>
--with-mpich2 /opt/cray/mpt/default/gni/mpich2-gnu/47
You can specify your own versions of the components needed by O|SS to build properly, or you can rely on the defaults that are installed on the system and used automatically by the build script. If a default system-installed component is determined to be inadequate for the O|SS build (e.g., missing development headers), then the install script will build a version of that component into the externals/root directory. If the builder has a specific version of a component, such as PAPI, they may use the “--with-papi” option to specify the path to that installation, and O|SS will be built using that installation of PAPI.
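As an illustration of the "missing development headers" case, the check below looks for PAPI's papi.h in common system include directories. The directory list is an assumption for illustration; install-tool's actual detection logic may differ:

```shell
# Look for PAPI development headers in common include paths (assumed list).
# If absent, install-tool would build PAPI into the krell root itself, or you
# can point at your own installation with --with-papi <directory>.
found=no
for dir in /usr/include /usr/local/include; do
    if [ -f "$dir/papi.h" ]; then
        found=yes
        echo "papi.h found in $dir"
    fi
done
echo "papi development headers present: $found"
```

The same idea applies to the other components: having only the runtime library installed (libpapi.so but no papi.h) is not enough for the O|SS build.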
One can also build individual components and specify the location to place the component. Here is the “install-tool --help” output:
$
$ ./install-tool --help
usage: ./install-tool [option]
--help, -h
This help text.
Introduction:
This install script can be used to build the krell externals package (--build-krell-root),
the CBTF components and libraries (--build-cbtf-all), and/or the default version of OpenSpeedShop
which now contains both the offline mode of operation and the cbtf mode of operation with
the: (--build-oss) install-tool option.
Typical usages examples are followed by the option descriptions. More examples and explanations
can be found in the Build and Install Guides for CBTF and Open|SpeedShop.
Typical usage example for cluster/PC build:
# Build only the krell-root
./install-tool --build-krell-root
--krell-root-prefix /opt/krellroot_v2.4.0
--with-openmpi /opt/openmpi-1.8.2
# Build cbtf components using the krell-root
./install-tool --build-cbtf-all
--cbtf-prefix /opt/cbtf_v2.4.0
--krell-root-prefix /opt/krellroot_v2.4.0
--with-openmpi /opt/openmpi-1.8.2
--with-cupti /usr/local/cuda-8.0/extras/CUPTI
--with-cuda /usr/local/cuda-8.0
# Build only OSS using the cbtf components and the krell-root
./install-tool --build-oss
--cbtf-prefix /opt/cbtf_v2.4.0
--krell-root-prefix /opt/krellroot_v2.4.0
--openss-prefix /opt/osscbtf_v2.4.0
--with-openmpi /opt/openmpi-1.8.2
--with-cupti /usr/local/cuda-8.0/extras/CUPTI
--with-cuda /usr/local/cuda-8.0
Typical in one command usage example for cluster/PC build:
# Build the krell-root, the cbtf components, and OSS using cbtf instrumentor
./install-tool --build-all
--cbtf-prefix /opt/cbtf_v2.4.0
--krell-root-prefix /opt/krellroot_v2.4.0
--openss-prefix /opt/osscbtf_v2.4.0
--with-openmpi /opt/openmpi-1.8.2
Typical example for building the new Qt4/Qt5 GUI tool/interface:
# Build graphviz for support of building the new cbtf-argonavis-gui
./install-tool --build-graphviz
--krell-root-prefix /opt/graphviz-2.40.1
# Build QtGraph for support of building the new cbtf-argonavis-gui, uses graphviz
./install-tool --build-QtGraph
--krell-root-prefix /opt/QtGraph-1.0.0
--with-graphviz /opt/graphviz-2.40.1
--with-qt /usr/lib64/qt5
# Build new cbtf-argonavis-gui, uses graphviz and QtGraph (installs into /opt/osscbtf_v2.4.0)
./install-tool --build-cbtfargonavisgui
--with-openss /opt/osscbtf_v2.4.0
--with-cbtf /opt/cbtf_v2.4.0
--krell-root-prefix /opt/krellroot_v2.4.0
--with-graphviz /opt/graphviz-2.40.1
--with-QtGraph /opt/QtGraph-1.0.0
--with-boost /opt/boost-1.53.0
--with-qt /usr/lib64/qt5
Option Descriptions:
--krell-root-prefix <directory>
Where <directory> is the location to install the components
needed for building CBTF and OpenSpeedShop
and it's supporting tools and libraries. The default
is /opt/KRELLROOT. It is not recommended to use /usr.
NOTE: This option will override any setting for
the environment variable KRELL_ROOT_PREFIX.
NOTE: This prefix should be used as the install path
when building a specific component (for example:
install-tool --build-libelf --krell-root-prefix /opt/libelf-0.8.13
--cbtf-prefix <directory>
Where <directory> is the location to install CBTF
and it's supporting tools and libraries. The default
is /opt/CBTF. It is not recommended to use /usr.
NOTE: This option will override any setting for
the environment variable CBTF_PREFIX.
--openss-prefix <directory>
Where <directory> is the location to install OpenSpeedShop
and it's supporting tools and libraries. The default
is /opt/OSS. It is not recommended to use /usr.
NOTE: This option will override any setting for
the environment variable OPENSS_PREFIX.
--build-all
Build the krell-root, cbtf components, and openspeedshop
The cbtf-prefix, krell-root-prefix, and the openss-prefix must be specified
--build-cbtf-all
Build only the cbtf components using the existing krell-root
The krell-root-prefix and cbtf-install-prefix must be specified
--build-krell-root
Build only the krell-root. The krell-root-prefix must be specified
--build-onlyosscbtf
Build only the OpenSpeedShop component for the cbtf instrumentor
using the cbtf and krell-root components.
Specify a target architecture
--target-arch <target>
Where acceptable target values are: cray-xk, cray-xe, bgp, bgq, bgqfe
Specify the compiler to be used for the build, gnu is default
--build-compiler {gnu, intel}
Use these MPI installations when building
--with-mpich <directory>
--with-mpich2 <directory>
--with-mpich2-driver <driver name>
--with-mvapich <directory>
--with-mvapich2 <directory>
--with-mvapich2-driver <driver name>
--with-openmpi <directory>
--with-mpt <directory>
Where <directory> is the installed path to the mpi implementation
to use for MPI support.
Build only the component specified by the build clause.
The krell-root-prefix must be specified and components will
be installed into that krell-root-prefix specified directory path.
--build-autotools
--build-bison
--build-boost
--build-xercesc
--build-binutils
--build-flex
--build-dyninst
--build-elfutils
--build-GOTCHA
--build-libelf
--build-libdwarf
--build-libmonitor
--build-libunwind
--build-llvm-openmp
--build-mrnet
--build-ompt
--build-papi
--build-python
--build-sqlite
--build-symtabapi
--build-cmake
--build-ptgf
--build-ptgfossgui
--build-qcustomplot
--build-graphviz
--build-QtGraph
--build-qt3
--build-vampirtrace
Build the experimental CUDA focused GUI Qt4/Qt5
--build-cbtfargonavisgui
Enable certain configuration options
--enable-bfd-symbol-resolution|--enable-bfd
--enable-debug
Force these components to be built and installed into the krellroot
or OSS install directories.
--force-binutils-build
--force-boost-build
--force-cmake-build
--force-dyninst-build
--force-libelf-build
--force-libdwarf-build
--force-libunwind-build
--force-papi-build
--force-qt3-build
--force-sqlite-build
--force-xercesc-build
--force-ompt-build
Build all of the above by force
--force-all
--skip-binutils-build
--skip-boost-build
--skip-cmake-build
--skip-dyninst-build
--skip-libdwarf-build
--skip-libelf-build
--skip-libunwind-build
--skip-mrnet-build
--skip-ompt-build
--skip-papi-build
--skip-qt3-build
--skip-sqlite-build
--skip-symtabapi-build
--skip-vampirtrace-build
--skip-xercesc-build
Use these non-root or alternative components when building
--with-binutils <directory>
--with-boost <directory>
--with-dyninst <directory>
--with-expat <directory>
--with-graphviz <directory>
--with-libelf <directory>
--with-libdwarf <directory>
--with-libmonitor <directory>
--with-libunwind <directory>
--with-mrnet <directory>
--with-papi <directory>
--with-python <directory>
--with-python-vers <version number>
--with-qt <directory>
--with-qt3 <directory>
--with-sqlite <directory>
--with-symtabapi <directory>
--with-xercesc <directory>
--with-otf <directory>
--with-QtGraph <directory>
--with-vt <directory>
--with-zlib <directory>
Where <directory> is the installed path to the alternative component.
Use these non-root or alternative compute node components when building a cbtf-krell fe
that points to targeted runtimes (cray, mic platforms)
--with-cn-cbtf <directory>
--with-cn-cbtf-krell <directory>
--with-cn-binutils <directory>
--with-cn-dyninst <directory>
--with-cn-libelf <directory>
--with-cn-libdwarf <directory>
--with-cn-libmonitor <directory>
--with-cn-libunwind <directory>
--with-cn-mrnet <directory>
--with-cn-symtabapi <directory>
--with-cn-xercesc <directory>
--with-cn-papi <directory>
--with-cn-boost <directory>
Where <directory> is the installed path to the alternative component.
--with-tls < explicit | implicit >
where the default is implicit
Optional install-tool arguments are needed to configure O|SS for MPI experiments. If the O|SS installation is intended to support running the MPI-specific O|SS experiments, then one or more MPI implementation arguments must be specified. The install-tool script will build MPI experiment collectors for each MPI implementation named with a "--with-<mpi implementation>" argument:
For openMPI:
--with-openmpi <openmpi install path>
For mpich:
--with-mpich <mpich install path>
For mpich2:
--with-mpich2 <mpich2 install path>
For SGI mpt1:
--with-mpt <mpt install path>
For mvapich
--with-mvapich <mvapich install path>
For mvapich2:
--with-mvapich2 <mvapich2 install path>
If none of the above arguments are specified, all non-MPI-specific O|SS experiments will still be built and will execute properly, but the mpi, mpit, and mpiotf experiments will not be built. The MPI implementation arguments are not necessary to run pcsamp, usertime, hwc, io, or fpe (and variants of those experiments), even on MPI applications. They are only needed to create the mpi, mpip, and mpit experiments, because the tool needs to know the MPI data structure definitions in order to process MPI performance data.
The install-tool script and the O|SS configuration code will act on those MPI arguments and configure O|SS to recognize these MPI implementations. Once the MPI-specific collectors are built and installed, users can create MPI experiments (mpi, mpip, mpit) and gather performance data for MPI applications built with those MPI implementations.
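As a concrete illustration, a single --build-all invocation that enables MPI collectors for both Open MPI and MVAPICH2 might look like the following sketch; every install path shown here is hypothetical and must be replaced with the actual locations on your system:

```
./install-tool --build-all \
    --krell-root-prefix /opt/OSS/krellroot_v2.4.0 \
    --cbtf-prefix /opt/OSS/cbtf_v2.4.0 \
    --openss-prefix /opt/OSS/osscbtf_v2.4.0 \
    --with-openmpi /opt/openmpi-1.8.2 \
    --with-mvapich2 /usr/mpi/mvapich2-2.1
```

Each additional --with-<mpi implementation> argument adds collector support for that implementation to the same installation.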
Building the new Qt4/Qt5 based O|SS GUI
There is a new graphical user interface under development whose initial focus was the cuda experiment. However, support has since been extended and now covers all of the O|SS experiments. To build this somewhat experimental, but useful, tool use the following install-tool command line format:
./install-tool --build-cbtfargonavisgui
--with-openss <path to the O|SS client installation>
--with-cbtf <path to the CBTF installation>
--krell-root-prefix <path to the krell root externals installation>
--with-graphviz <path to graphviz installation directory>
--with-QtGraph <path to QtGraph installation directory>
--with-boost <path to boost installation directory>
--with-qt <path to qt4 or qt5 installation directory>
For example:
./install-tool --build-cbtfargonavisgui
--with-openss /opt/OSS/osscbtf_v2.4.0
--with-cbtf /opt/OSS/cbtf_v2.4.0
--krell-root-prefix /opt/OSS/krellroot_v2.4.0
--with-graphviz /opt/OSS/graphviz-2.40.1
--with-QtGraph /opt/OSS/QtGraph-1.0.0
--with-boost /opt/boost-1.53_0
--with-qt /usr/lib64/qt4
See the O|SS reference/users guide for information on how to use the new graphical user interface. The new GUI command is: openss-gui, while the existing GUI’s command is: openss.
If graphviz is not installed on the system, this install-tool command can be used to build and install it. For example:
./install-tool --build-graphviz --krell-root-prefix /opt/OSS/graphviz-2.40.1
QtGraph is a component of the new Qt4/Qt5 GUI and, at this time, also needs to be installed manually. The install-tool command to install QtGraph is as follows:
./install-tool --build-QtGraph --krell-root-prefix /opt/OSS/QtGraph-1.0.0 --with-graphviz /opt/OSS/graphviz-2.40.1 --with-qt /usr/lib64/qt4
Module Files, Dotkits, and softenv files
On most systems, a module file, dotkit file, or softenv file must be created after O|SS is built; that file is then activated/loaded prior to running O|SS. There are examples of each in the Runtime Environment Examples section below.
Install tool example commands from various systems
These examples show optional ways of building the default version of O|SS. The krell root components, the cbtf components, and the O|SS client components are installed into separate install directories, allowing each to be updated individually. The "--build-all" option allows all the pieces to be built with one command in which all the install locations are specified. The "--krell-root-prefix", "--cbtf-prefix", and "--openss-prefix" options must be specified so that the build script knows where to install the separate components. Alternatively, you can pass the same install prefix for all three prefix options.
Generic Laptop or Desktop Platform Installation Examples
Please load the cmake module file or have cmake installed, as it is required for building some root components as well as O|SS and the CBTF components (named cbtf, cbtf-krell, cbtf-argonavis, and cbtf-lanl).
Build only the krell-root
./install-tool --build-krell-root
--krell-root-prefix /opt/krellroot_v2.4.0
--with-openmpi /opt/openmpi-1.8.2
Build cbtf components using the krell-root
./install-tool --build-cbtf-all
--cbtf-prefix /opt/cbtf_only_v2.4.0
--krell-root-prefix /opt/krellroot_v2.4.0
--with-openmpi /opt/openmpi-1.8.2
--with-cupti /usr/local/cuda-6.5/extras/CUPTI
--with-cuda /usr/local/cuda-6.5
Build only OSS using the cbtf components and the krell-root
./install-tool --build-oss
--cbtf-prefix /opt/cbtf_only_v2.4.0
--krell-root-prefix /opt/krellroot_v2.4.0
--openss-prefix /opt/osscbtf_v2.4.0
--with-openmpi /opt/openmpi-1.8.2
--with-cupti /usr/local/cuda-6.5/extras/CUPTI
--with-cuda /usr/local/cuda-6.5
Build everything: the krell-root, the cbtf components, and OSS using cbtf instrumentor
./install-tool --build-all
--cbtf-prefix /opt/cbtf_only_v2.4.0
--krell-root-prefix /opt/krellroot_v2.4.0
--openss-prefix /opt/osscbtf_v2.4.0
--with-openmpi /opt/openmpi-1.8.2
Cray Platform Install Examples
Depending on the default programming environment setting, you may need to swap the PrgEnv-cray or PrgEnv-pgi environment module for PrgEnv-gnu. If you are building for CUDA, you will also have to load the cudatoolkit module file. The cmake tool is required by the cbtf build and is not loaded by default on many Cray systems. A possible module load/swap scenario prior to invoking the install-tool script is as follows:
module swap PrgEnv-pgi PrgEnv-gnu
module load cudatoolkit
module load cmake
Instructions for building O|SS CBTF based versions on Cray platforms
We have moved to a multi-step process for building on platforms where the front-end node and the compute nodes run different processor sets. By building the compute node O|SS and components with the compilers suggested for the compute nodes, and building the front-end node O|SS and components separately, we are able to optimally support the Cray platform.
The high-level view is: first build for the compute nodes by building the components needed by O|SS and CBTF (krellroot), then build the CBTF components, and finally build O|SS. After the compute node versions are built, we do a similar set of builds for the front-end components and the O|SS and CBTF clients.
Building for the compute nodes
This must be done first because we pass the O|SS and CBTF compute node installation directories as arguments to the front-end install-tool build commands.
Set up the build environment for building for the compute nodes
Note: this is an example set of module settings. Newer or older versions of Cray software may require alternative module files to be loaded or unloaded. This example is from the DOD Cray platform Gordon. It follows closely with most Cray builds, although the versions used differ from system to system.
Also note: there is a new interface that appears to be replacing the ALPS package and aprun as the default way to launch applications on the Cray. The CTI interface allows for launching applications via native SLURM srun. On systems where the CTI interface is in use, please pass the --use-cti option to install-tool and do not pass the --with-alps option.
Another note: some Cray systems do not install the expat-static package, which causes the MRNet package build to fail. To resolve this issue, we added the expat tarball to the external packages in the SOURCES directory. To build it into the krellroot, use: install-tool --build-expat --krell-root-prefix <install location>. Then add --with-expat to the install-tool --build-krell-root command line.
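Putting the expat workaround above together, the two install-tool steps might look like this sketch; the krellroot install path is a hypothetical placeholder:

```
# Step 1: build expat into the krellroot install location
./install-tool --build-expat --krell-root-prefix /opt/OSS/krellroot_v2.4.0

# Step 2: point the krellroot build at that expat installation
./install-tool --build-krell-root \
    --krell-root-prefix /opt/OSS/krellroot_v2.4.0 \
    --with-expat /opt/OSS/krellroot_v2.4.0
```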
module unload PrgEnv-pgi PrgEnv-cray PrgEnv-intel
module load PrgEnv-gnu
module unload gcc; module load gcc/4.9.3
export SYSROOT_DIR=/opt/cray/xc-sysroot/default
module load cmake-3.2.2
BASE_IDIR=/p/home/app/PET/pkgs/openss
TOOL_VERS="_v2.4.0.beta1"
KROOT_IDIR=${BASE_IDIR}/krellroot${TOOL_VERS}
CBTF_IDIR=${BASE_IDIR}/cbtf${TOOL_VERS}
OSSCBTF_IDIR=${BASE_IDIR}/osscbtf${TOOL_VERS}
OSSOFF_IDIR=${BASE_IDIR}/ossoff${TOOL_VERS}
PAPI_IDIR=/opt/cray/papi/default
MPICH_IDIR=/opt/cray/mpt/7.3.2/gni/mpich-gnu/5.1
ALPS_IDIR=/opt/cray/alps/default
export cc=gcc
export CC=gcc
export CXX=g++
Build the compute node versions of cbtf, cbtf-krell, cbtf-argonavis, cbtf-lanl
# Build root components runtime only
./install-tool --runtime-only
--target-arch cray --target-shared
--build-krell-root --krell-root-prefix ${KROOT_IDIR}/compute
--with-mpich ${MPICH_IDIR}
--with-papi ${PAPI_IDIR}
--with-alps ${ALPS_IDIR}
./install-tool --build-cbtf-all --runtime-only
--target-arch cray --target-shared
--cbtf-prefix ${CBTF_IDIR}/compute
--krell-root-prefix ${KROOT_IDIR}/compute
--with-mpich ${MPICH_IDIR}
--with-papi ${PAPI_IDIR}
Build the compute node version of krellroot
./install-tool --build-krell-root
--runtime-only
--target-arch cray
--target-shared
--krell-root-prefix /home/jgalaro/krellroot_stable/compute
--with-papi /opt/cray/papi/5.3.1
./install-tool --build-cbtf-all
--runtime-only
--target-arch cray
--target-shared
--cbtf-prefix /home/jgalaro/cbtf_only_stable/compute
--krell-root-prefix /home/jgalaro/krellroot_stable/compute
--with-mpich2 /opt/cray/mpt/7.0.1/gni/mpich2-gnu/48
--with-papi /opt/cray/papi/5.3.1
--with-dyninst /home/jgalaro/compute/dyninst-9.0.0_gcc
Build the compute node versions of O|SS for the CBTF version
Note: There is nothing to build, as this version uses the CBTF compute node components for gathering the data. Those were built in the --build-cbtf-all install-tool build command.
NOTE:
The default GNU compilers are too old on some Cray platforms, and if we build with them some of the necessary software components will fail to build. If the default GNU compilers are sufficient, this isn't an issue and no copy or LD_LIBRARY_PATH setup is needed.
If a newer compiler was loaded via a module file in order to build the tool, then after building everything you must copy libstdc++.so.6 from the compiler directory (because it is not available on the compute nodes) into the compute node install directory (krellroot/lib64, for example), or set up LD_LIBRARY_PATH to point to that location.
Here is an example of how it could be done.
> module list
Currently Loaded Modulefiles:
1) modules/3.2.6.7 2) eswrap/1.0.8 3) cray-mpich/7.0.1 4) cmake-3.2.2 5) gcc/4.8.2
> which gcc
/opt/gcc/4.8.2/bin/gcc
> ls -ls /opt/gcc/4.8.2//snos/lib64/libstdc++.so.6
0 lrwxrwxrwx 1 root root 19 Aug 8 2014 /opt/gcc/4.8.2//snos/lib64/libstdc++.so.6 -> libstdc++.so.6.0.18
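Given the gcc location shown above, the copy (or the LD_LIBRARY_PATH alternative) could be sketched as follows; the krellroot compute install path is hypothetical and should match the --krell-root-prefix used for the compute node build:

```
# Copy the newer libstdc++ runtime into the compute node install tree
cp -a /opt/gcc/4.8.2/snos/lib64/libstdc++.so.6* \
      /p/home/app/PET/pkgs/openss/krellroot_v2.4.0.beta1/compute/lib64/

# ...or, instead of copying, make the compiler's runtime visible at run time
export LD_LIBRARY_PATH=/opt/gcc/4.8.2/snos/lib64:$LD_LIBRARY_PATH
```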
Building for the front-end or login nodes
Set up the build environment for building for the front-end or login nodes
* module unload PrgEnv-pgi PrgEnv-gnu PrgEnv-cray PrgEnv-intel
* module unload craype-network-gemini craype-mc8
* module load cmake-3.2.2
* module load gcc
* export CXX=g++
* export cc=gcc
Build the front-end node version of krellroot
./install-tool --build-krell-root
--krell-root-prefix /home/jgalaro/krellroot_stable
--with-papi /opt/cray/papi/5.3.1
--with-alps /opt/cray/xe-sysroot/default/usr
--with-mpich2 /opt/cray/mpt/7.0.1/gni/mpich2-gnu/48
Build the front-end node versions of cbtf, cbtf-krell, cbtf-argonavis, cbtf-lanl
Note: The --with-cn-<component> lines take compute node installations as their arguments. We point the front-end build at the compute node runtime libraries so that the properly built compute node versions of those libraries are used at run time.
./install-tool --build-cbtf-all
--runtime-target-arch cray
--cbtf-prefix /home/jgalaro/cbtf_only_stable
--krell-root-prefix /home/jgalaro/krellroot_stable
--with-mpich2 /opt/cray/mpt/7.0.1/gni/mpich2-gnu/48
--with-cn-boost /home/jgalaro/krellroot_stable/compute
--with-cn-mrnet /home/jgalaro/krellroot_stable/compute
--with-cn-xercesc /home/jgalaro/krellroot_stable/compute
--with-cn-libmonitor /home/jgalaro/krellroot_stable/compute
--with-cn-libunwind /home/jgalaro/krellroot_stable/compute
--with-cn-dyninst /home/jgalaro/compute/dyninst-9.0.0_gcc
--with-cn-papi /opt/cray/papi/5.3.1
--with-cn-cbtf-krell /home/jgalaro/cbtf_only_stable/compute
--with-cn-cbtf /home/jgalaro/cbtf_only_stable/compute
--with-binutils /home/jgalaro/krellroot_stable
--with-boost /home/jgalaro/krellroot_stable
--with-mrnet /home/jgalaro/krellroot_stable
--with-xercesc /home/jgalaro/krellroot_stable
--with-libmonitor /home/jgalaro/krellroot_stable
--with-libunwind /home/jgalaro/krellroot_stable
--with-dyninst /home/jgalaro/krellroot_stable
--with-papi /opt/cray/papi/5.3.1
Build the front-end node version of O|SS for the CBTF version
./install-tool --target-arch cray
--build-oss
--openss-prefix /home/jgalaro/osscbtf_stable
--with-cn-cbtf-krell /home/jgalaro/cbtf_only_stable/compute
--krell-root-prefix /home/jgalaro/krellroot_stable
--with-mpich2 /opt/cray/mpt/7.0.1/gni/mpich2-gnu/48
--with-papi /opt/cray/papi/5.3.1
--with-boost /home/jgalaro/krellroot_stable
--with-mrnet /home/jgalaro/krellroot_stable
--with-xercesc /home/jgalaro/krellroot_stable
--with-libmonitor /home/jgalaro/krellroot_stable
--with-libunwind /home/jgalaro/krellroot_stable
--with-dyninst /home/jgalaro/krellroot_stable
--with-libelf /home/jgalaro/krellroot_stable
--with-libdwarf /home/jgalaro/krellroot_stable
--with-binutils /home/jgalaro/krellroot_stable
--cbtf-prefix /home/jgalaro/cbtf_only_stable
Blue Gene Systems (only offline supported) Install Examples
For the Blue Gene systems, it is still a two-step process to build the offline version of O|SS. First, build the compute node collectors and runtimes that run on the compute nodes. Note that the compute node installation paths have a "bgq" sub-directory appended to the viewer installation path by convention, not necessity. The location must be different from that of the viewer installation so as not to clobber it; the builder is responsible for keeping the two installations separate. Next, build the O|SS viewer that runs on the front-end node, giving this build the --with-runtime-dir argument pointing to the O|SS compute node install directory. That way the front-end knows where the compute node collectors and runtime libraries are located.
For the Blue Gene builds, you must build offline for the compute node first and then build offline for the front-end node. You may build offline with or without the krellroot concept. Using krellroot allows you to build just the new openspeedshop-2.4 updates when they arrive, without rebuilding all the krellroot components.
Compute node – O|SS collectors and runtimes
Build O|SS collectors and runtimes without using krell root components
./install-tool --build-offline
--runtime-only --target-shared --target-arch bgq
--openss-prefix /usr/global/tools/openspeedshop/oss-dev/bgq/ossoff_v2.4.0/compute
--with-mpich2 /bgsys/drivers/ppcfloor/comm/gcc
Build krell root components for compute node
./install-tool --runtime-only --target-shared --target-arch bgq --build-krell-root
--krell-root-prefix /usr/global/tools/openspeedshop/ossdev/bgq/krellroot_v2.4.0/compute
--with-mpich2 /bgsys/drivers/ppcfloor/comm/gcc
Build O|SS collectors and runtimes using krell root components
./install-tool --build-offline
--runtime-only --target-shared --target-arch bgq
--openss-prefix /usr/global/tools/openspeedshop/oss-dev/bgq/ossoff_v2.4.0/compute
--krell-root-prefix /usr/global/tools/openspeedshop/ossdev/bgq/krellroot_v2.4.0/compute
--with-mpich2 /bgsys/drivers/ppcfloor/comm/gcc
Front end node – O|SS viewer
Build O|SS Viewer not using krell root components
./install-tool --build-offline
--openss-prefix /usr/global/tools/openspeedshop/oss-dev/bgq/ossoff_v2.4.0
--with-mpich2 /bgsys/drivers/ppcfloor/comm/gcc
--with-runtime-dir /usr/global/tools/openspeedshop/oss-dev/bgq/ossoff_v2.4.0/compute
Build krell root components
./install-tool --build-krell-root
--krell-root-prefix /usr/global/tools/openspeedshop/oss-dev/bgq/krellroot_v2.4.0
--with-mpich2 /bgsys/drivers/ppcfloor/comm/gcc
Build O|SS viewer using krell root components
./install-tool --build-offline
--openss-prefix /usr/global/tools/openspeedshop/oss-dev/bgq/ossoff_v2.4.0
--krell-root-prefix /usr/global/tools/openspeedshop/oss-dev/bgq/krellroot_v2.4.0
--with-mpich2 /bgsys/drivers/ppcfloor/comm/gcc
--with-runtime-dir /usr/global/tools/openspeedshop/oss-dev/bgq/ossoff_v2.4.0/compute
Intel MIC (KNL) Platform Install Examples
The builds of O|SS, CBTF and supporting components are done on the Intel MIC KNL login node. Use the generic cluster build examples above.
Intel MIC (KNC) Co-Processor Platform Install Examples
The builds of O|SS, CBTF, and supporting components are done on the Intel MIC co-processor login node (with Intel MIC specific software: compilers, libraries). This example build scenario is based on experiences from building on NASA's maia system and the NERSC test-bed MIC system named babbage.
Instructions for building O|SS CBTF based versions on Intel MIC platforms
We have moved to a multi-step process for building on platforms where the front-end node and the compute nodes run different processor sets. By building the compute node O|SS and components with the compilers suggested for the compute nodes, and building the front-end node O|SS and components separately, we are able to optimally support the Intel MIC platform.
The high-level view is: first build for the compute nodes by building the components needed by O|SS and CBTF (krellroot), then build the CBTF components, and finally build O|SS. After the compute node versions are built, we do a similar set of builds for the front-end components and the O|SS and CBTF clients.
Building for the compute nodes
This must be done first because we pass the O|SS and CBTF compute node installation directories as arguments to the front-end install-tool build commands.
Set up the build environment for building for the compute nodes
Build the compute node version of krellroot
./install-tool --build-krell-root
--runtime-only
--target-arch mic
--krell-root-prefix /home/jgalaro/krellroot_stable/compute
Build the compute node versions of cbtf, cbtf-krell, cbtf-argonavis, cbtf-lanl
./install-tool --build-cbtf-all
--runtime-only
--target-arch mic
--cbtf-prefix /home/jgalaro/cbtf_only_stable/compute
--krell-root-prefix /home/jgalaro/krellroot_stable/compute
Build the compute node versions of O|SS for the CBTF version
Note: There is nothing to build, as this version uses the CBTF compute node components for gathering the data. Those were built in the --build-cbtf-all install-tool build command.
Building for the front-end or login nodes
Build the front-end node version of krellroot
./install-tool --build-krell-root
--krell-root-prefix /home/jgalaro/krellroot_stable
Build the front-end node versions of cbtf, cbtf-krell, cbtf-argonavis, cbtf-lanl
Note: The --with-cn-<component> lines take compute node installations as their arguments. We point the front-end build at the compute node runtime libraries so that the properly built compute node versions of those libraries are used at run time.
./install-tool --build-cbtf-all
--runtime-target-arch mic
--cbtf-prefix /home/jgalaro/cbtf_only_stable
--krell-root-prefix /home/jgalaro/krellroot_stable
--with-cn-boost /home/jgalaro/krellroot_stable/compute
--with-cn-mrnet /home/jgalaro/krellroot_stable/compute
--with-cn-xercesc /home/jgalaro/krellroot_stable/compute
--with-cn-libmonitor /home/jgalaro/krellroot_stable/compute
--with-cn-libunwind /home/jgalaro/krellroot_stable/compute
--with-cn-dyninst /home/jgalaro/krellroot_stable/compute
--with-cn-cbtf-krell /home/jgalaro/cbtf_only_stable/compute
--with-cn-cbtf /home/jgalaro/cbtf_only_stable/compute
--with-binutils /home/jgalaro/krellroot_stable
--with-boost /home/jgalaro/krellroot_stable
--with-mrnet /home/jgalaro/krellroot_stable
--with-xercesc /home/jgalaro/krellroot_stable
--with-libmonitor /home/jgalaro/krellroot_stable
--with-libunwind /home/jgalaro/krellroot_stable
--with-dyninst /home/jgalaro/krellroot_stable
Build the front-end node version of O|SS for the CBTF version
./install-tool --target-arch mic
--build-oss
--openss-prefix /home/jgalaro/osscbtf_stable
--with-cn-cbtf-krell /home/jgalaro/cbtf_only_stable/compute
--krell-root-prefix /home/jgalaro/krellroot_stable
--with-mpich2 /opt/cray/mpt/7.0.1/gni/mpich2-gnu/48
--with-papi /opt/cray/papi/5.3.1
--with-boost /home/jgalaro/krellroot_stable
--with-mrnet /home/jgalaro/krellroot_stable
--with-xercesc /home/jgalaro/krellroot_stable
--with-libmonitor /home/jgalaro/krellroot_stable
--with-libunwind /home/jgalaro/krellroot_stable
--with-dyninst /home/jgalaro/krellroot_stable
--with-libelf /home/jgalaro/krellroot_stable
--with-libdwarf /home/jgalaro/krellroot_stable
--with-binutils /home/jgalaro/krellroot_stable
--cbtf-prefix /home/jgalaro/cbtf_only_stable
ARM Platform Install Examples
Instructions for O|SS CBTF based versions on ARM platforms
Building for the ARM is basically similar to building for a generic cluster, except that the build needs to know it is targeting ARM in order to add some special compiler options (-funwind-tables -fasynchronous-unwind-tables) for unwinding call paths. So we ask builders to specify "--target-arch arm" on the install-tool build commands.
Building for the ARM platform
Set up the build environment for building for the front-end or login nodes
* module load cmake
Build the krellroot - components needed to support building cbtf and O|SS
./install-tool --build-krell-root
--target-arch arm
--krell-root-prefix /home/jgalaro/krellroot_stable
--with-openmpi /home/projects/arm64/openmpi/1.8.2/gnu/4.9.1
--with-qt3 /home/jgalaro/qt3
Build cbtf, cbtf-krell, cbtf-argonavis, cbtf-lanl
./install-tool --build-cbtf-all
--target-arch arm
--cbtf-prefix /home/jgalaro/cbtf_only_stable
--krell-root-prefix /home/jgalaro/krellroot_stable
--with-openmpi /home/projects/arm64/openmpi/1.8.2/gnu/4.9.1
Build O|SS for the CBTF version
./install-tool --build-oss
--target-arch arm
--cbtf-prefix /home/jgalaro/cbtf_only_stable
--krell-root-prefix /home/jgalaro/krellroot_stable
--openss-prefix /home/jgalaro/osscbtf_stable
--with-openmpi /home/projects/arm64/openmpi/1.8.2/gnu/4.9.1
To Run O|SS
Please refer to the Quick Start Guide from the O|SS documentation web-site page:
https://www.openspeedshop.org/wp/documentation/
for a short introduction to how to use O|SS.
To access the User's Reference Guide, go to the O|SS documentation web-site page:
https://www.openspeedshop.org/wp/documentation/
These are the best sources for information on how to run O|SS.
Environment Setup
OPENSS_PLUGIN_PATH
This environment variable specifies where the O|SS main program will look for experiment plugins. This is in addition to the normal search path, which is "<installdir>/lib{64}/openspeedshop". Prior to O|SS initialization, set this variable to the path to your non-default plugins (e.g. "setenv OPENSS_PLUGIN_PATH /g2/install/lib/openspeedshop").
LD_LIBRARY_PATH
This environment variable points to the directories where O|SS component libraries and O|SS libraries are installed.
Set this environment variable to <installdir>/lib{64} (e.g. "setenv LD_LIBRARY_PATH <installdir>/lib{64}:$LD_LIBRARY_PATH").
DYNINSTAPI_RT_LIB
This environment variable points to the directory where the Dyninst (dynamic runtime library) component runtime library is installed. This library is used in O|SS to detect loops and create per-loop statistic performance information available in the O|SS views.
Set this environment variable to
<installdir for dyninst>/lib{64}/libdyninstAPI_RT.so
For example:
setenv DYNINSTAPI_RT_LIB <installdir for dyninst>/lib{64}/libdyninstAPI_RT.so
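For bash/sh users, the setenv examples above translate as follows; the two install prefixes are hypothetical placeholders for your actual O|SS and Dyninst installation directories:

```shell
# Hypothetical install prefixes; substitute your real installation paths.
OSS_PREFIX=/opt/OSS/osscbtf_v2.4.0
DYNINST_PREFIX=/opt/OSS/krellroot_v2.4.0

# Where O|SS looks for extra experiment plugins
export OPENSS_PLUGIN_PATH=${OSS_PREFIX}/lib64/openspeedshop
# Make the O|SS and component libraries visible to the dynamic loader
export LD_LIBRARY_PATH=${OSS_PREFIX}/lib64:${LD_LIBRARY_PATH:-}
# Dyninst runtime library used for loop detection
export DYNINSTAPI_RT_LIB=${DYNINST_PREFIX}/lib64/libdyninstAPI_RT.so
```

These settings can be placed in a shell startup file, module file, or dotkit as described in the Module Files section above.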
OPENSS_MPI_IMPLEMENTATION (for offline mode of operation)
This environment variable specifies the MPI implementation being used by the MPI application whose performance is being analyzed. It should be set to one of the currently supported MPI implementations: