Using icc and ifort with Open MPI


On 12/14/2011 12:21 PM, Micah Sklut wrote: 
> Hi Gustav, 

> I did read Prince's email: 

> When I do "which mpif90", i get: 
> /opt/openmpi/intel/bin/mpif90 
> which is the desired directory/binary 

> As I mentioned, the config log file indicated it was using ifort, and 
> had no mention of gfortran. 
> Below is the output from ompi_info. It shows reference to the correct 
> ifort compiler. But the mpif90 compiler still yields a gfortran 
> compiler. 

Micah, 

You are confusing the compilers used to build Open MPI itself with the 
compilers the Open MPI wrappers use to compile other codes with the 
proper build environment. 

For example, your configure command, 

./configure --prefix=/opt/openmpi/intel CC=gcc CXX=g++ F77=ifort FC=ifort 

doesn't tell Open MPI to use ifort for mpif90 and mpif77. It tells the 
build process to use ifort to compile the Fortran sections of the Open 
MPI source code. To tell mpif90 and mpif77 which compilers you'd like 
them to use when compiling Fortran programs that use Open MPI, you must 
set the environment variables OMPI_F77 and OMPI_FC. To illustrate, when 
I want to use the GNU compilers, I set the following in my .bashrc: 

export OMPI_CC=gcc 
export OMPI_CXX=g++ 
export OMPI_F77=gfortran 
export OMPI_FC=gfortran 

If I wanted to use the PGI compilers instead, I would swap the above four lines for these: 

export OMPI_CC=pgcc 
export OMPI_CXX=pgCC 
export OMPI_F77=pgf77 
export OMPI_FC=pgf95 
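
For the Intel toolchain this thread is actually about, the equivalent 
overrides would presumably be the following (icc, icpc and ifort are the 
standard Intel compiler driver names; adjust them if your installation 
uses different ones): 

export OMPI_CC=icc 
export OMPI_CXX=icpc 
export OMPI_F77=ifort 
export OMPI_FC=ifort 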

You can verify which compiler is set using the --showme switch to mpif90: 

$ mpif90 --showme 
pgf95 -I/usr/local/openmpi-1.2.8/pgi-8.0/x86_64/include 
-I/usr/local/openmpi-1.2.8/pgi-8.0/x86_64/lib -L/usr/lib64 
-L/usr/local/openmpi-1.2.8/pgi/x86_64/lib 
-L/usr/local/openmpi-1.2.8/pgi-8.0/x86_64/lib -lmpi_f90 -lmpi_f77 -lmpi 
-lopen-rte -lopen-pal -libverbs -lrt -lnuma -ldl -Wl,--export-dynamic 
-lnsl -lutil -lpthread -ldl 

I suspect that if you run the command 'env | grep OMPI_FC', you'll see that 
you have it set to gfortran. I can verify that mine is set to pgf95 this 
way: 

$ env | grep OMPI_FC 
OMPI_FC=pgf95 

Of course, a simple echo would work, too: 

$ echo $OMPI_FC 
pgf95 

You can also change these settings by editing the file 
mpif90-wrapper-data.txt under share/openmpi/ in your Open MPI installation. 
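
For reference, the relevant lines of that file look roughly like the sketch 
below (based on a typical 1.4.x layout; check your own copy, as the field 
names can differ between versions). In a gfortran-built installation it 
would read something like: 

    language=Fortran 90 
    compiler_env=FC 
    compiler_flags_env=FCFLAGS 
    compiler=gfortran 

Changing compiler=gfortran to compiler=ifort makes ifort the wrapper's 
default; an OMPI_FC environment variable still overrides it at run time. 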

Full details on setting these variables (and others) can be found in the 
FAQ: 

http://www.open-mpi.org/faq/?category=mpi-apps#override-wrappers-after-v1.0 
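
For your specific case, a minimal end-to-end check might look like this 
(assuming the Intel environment script has already been sourced so that 
ifort is on your PATH): 

$ export OMPI_FC=ifort 
$ mpif90 --showme:command 
ifort 

Here --showme:command prints just the underlying compiler; the full 
--showme output shown above works as well. 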

--Prentice

> --
> barells_at_ip-10-17-153-123:~> ompi_info
>                  Package: Open MPI barells_at_ip-10-17-148-204 Distribution
>                 Open MPI: 1.4.4
>    Open MPI SVN revision: r25188
>    Open MPI release date: Sep 27, 2011
>                 Open RTE: 1.4.4
>    Open RTE SVN revision: r25188
>    Open RTE release date: Sep 27, 2011
>                     OPAL: 1.4.4
>        OPAL SVN revision: r25188
>        OPAL release date: Sep 27, 2011
>             Ident string: 1.4.4
>                   Prefix: /usr/lib64/mpi/gcc/openmpi
>  Configured architecture: x86_64-unknown-linux-gnu
>           Configure host: ip-10-17-148-204
>            Configured by: barells
>            Configured on: Wed Dec 14 14:22:43 UTC 2011
>           Configure host: ip-10-17-148-204
>                 Built by: barells
>                 Built on: Wed Dec 14 14:27:56 UTC 2011
>               Built host: ip-10-17-148-204
>               C bindings: yes
>             C++ bindings: yes
>       Fortran77 bindings: yes (all)
>       Fortran90 bindings: yes
>  Fortran90 bindings size: small
>               C compiler: gcc
>      C compiler absolute: /usr/bin/gcc
>             C++ compiler: g++
>    C++ compiler absolute: /usr/bin/g++
>       Fortran77 compiler: ifort
>   Fortran77 compiler abs: /opt/intel/fce/9.1.040/bin/ifort
>       Fortran90 compiler: ifort
>   Fortran90 compiler abs: /opt/intel/fce/9.1.040/bin/ifort
>              C profiling: yes
>            C++ profiling: yes
>      Fortran77 profiling: yes
>      Fortran90 profiling: yes
>           C++ exceptions: no
>           Thread support: posix (mpi: no, progress: no)
>            Sparse Groups: no
>   Internal debug support: no
>      MPI parameter check: runtime
> Memory profiling support: no
> Memory debugging support: no
>          libltdl support: yes
>    Heterogeneous support: no
>  mpirun default --prefix: no
>          MPI I/O support: yes
>        MPI_WTIME support: gettimeofday
> Symbol visibility support: yes
>    FT Checkpoint support: no (checkpoint thread: no)
>            MCA backtrace: execinfo (MCA v2.0, API v2.0, Component v1.4.2)
>               MCA memory: ptmalloc2 (MCA v2.0, API v2.0, Component v1.4.2)
>            MCA paffinity: linux (MCA v2.0, API v2.0, Component v1.4.2)
>                MCA carto: auto_detect (MCA v2.0, API v2.0, Component v1.4.2)
>                MCA carto: file (MCA v2.0, API v2.0, Component v1.4.2)
>            MCA maffinity: first_use (MCA v2.0, API v2.0, Component v1.4.2)
>                MCA timer: linux (MCA v2.0, API v2.0, Component v1.4.2)
>          MCA installdirs: env (MCA v2.0, API v2.0, Component v1.4.2)
>          MCA installdirs: config (MCA v2.0, API v2.0, Component v1.4.2)
>                  MCA dpm: orte (MCA v2.0, API v2.0, Component v1.4.2)
>               MCA pubsub: orte (MCA v2.0, API v2.0, Component v1.4.2)
>            MCA allocator: basic (MCA v2.0, API v2.0, Component v1.4.2)
>            MCA allocator: bucket (MCA v2.0, API v2.0, Component v1.4.2)
>                 MCA coll: basic (MCA v2.0, API v2.0, Component v1.4.2)
>                 MCA coll: hierarch (MCA v2.0, API v2.0, Component v1.4.2)
>                 MCA coll: inter (MCA v2.0, API v2.0, Component v1.4.2)
>                 MCA coll: self (MCA v2.0, API v2.0, Component v1.4.2)
>                 MCA coll: sm (MCA v2.0, API v2.0, Component v1.4.2)
>                 MCA coll: sync (MCA v2.0, API v2.0, Component v1.4.2)
>                 MCA coll: tuned (MCA v2.0, API v2.0, Component v1.4.2)
>                   MCA io: romio (MCA v2.0, API v2.0, Component v1.4.2)
>                MCA mpool: fake (MCA v2.0, API v2.0, Component v1.4.2)
>                MCA mpool: rdma (MCA v2.0, API v2.0, Component v1.4.2)
>                MCA mpool: sm (MCA v2.0, API v2.0, Component v1.4.2)
>                  MCA pml: cm (MCA v2.0, API v2.0, Component v1.4.2)
>                  MCA pml: csum (MCA v2.0, API v2.0, Component v1.4.2)
>                  MCA pml: ob1 (MCA v2.0, API v2.0, Component v1.4.2)
>                  MCA pml: v (MCA v2.0, API v2.0, Component v1.4.2)
>                  MCA bml: r2 (MCA v2.0, API v2.0, Component v1.4.2)
>               MCA rcache: vma (MCA v2.0, API v2.0, Component v1.4.2)
>                  MCA btl: ofud (MCA v2.0, API v2.0, Component v1.4.2)
>                  MCA btl: openib (MCA v2.0, API v2.0, Component v1.4.2)
>                  MCA btl: self (MCA v2.0, API v2.0, Component v1.4.2)
>                  MCA btl: sm (MCA v2.0, API v2.0, Component v1.4.2)
>                  MCA btl: tcp (MCA v2.0, API v2.0, Component v1.4.2)
>                  MCA btl: udapl (MCA v2.0, API v2.0, Component v1.4.2)
>                 MCA topo: unity (MCA v2.0, API v2.0, Component v1.4.2)
>                  MCA osc: pt2pt (MCA v2.0, API v2.0, Component v1.4.2)
>                  MCA osc: rdma (MCA v2.0, API v2.0, Component v1.4.2)
>                  MCA iof: hnp (MCA v2.0, API v2.0, Component v1.4.2)
>                  MCA iof: orted (MCA v2.0, API v2.0, Component v1.4.2)
>                  MCA iof: tool (MCA v2.0, API v2.0, Component v1.4.2)
>                  MCA oob: tcp (MCA v2.0, API v2.0, Component v1.4.2)
>                 MCA odls: default (MCA v2.0, API v2.0, Component v1.4.2)
>                  MCA ras: slurm (MCA v2.0, API v2.0, Component v1.4.2)
>                MCA rmaps: load_balance (MCA v2.0, API v2.0, Component v1.4.2)
>                MCA rmaps: rank_file (MCA v2.0, API v2.0, Component v1.4.2)
>                MCA rmaps: round_robin (MCA v2.0, API v2.0, Component v1.4.2)
>                MCA rmaps: seq (MCA v2.0, API v2.0, Component v1.4.2)
>                  MCA rml: oob (MCA v2.0, API v2.0, Component v1.4.2)
>               MCA routed: binomial (MCA v2.0, API v2.0, Component v1.4.2)
>               MCA routed: direct (MCA v2.0, API v2.0, Component v1.4.2)
>               MCA routed: linear (MCA v2.0, API v2.0, Component v1.4.2)
>                  MCA plm: rsh (MCA v2.0, API v2.0, Component v1.4.2)
>                  MCA plm: slurm (MCA v2.0, API v2.0, Component v1.4.2)
>                MCA filem: rsh (MCA v2.0, API v2.0, Component v1.4.2)
>               MCA errmgr: default (MCA v2.0, API v2.0, Component v1.4.2)
>                  MCA ess: env (MCA v2.0, API v2.0, Component v1.4.2)
>                  MCA ess: hnp (MCA v2.0, API v2.0, Component v1.4.2)
>                  MCA ess: singleton (MCA v2.0, API v2.0, Component v1.4.2)
>                  MCA ess: slurm (MCA v2.0, API v2.0, Component v1.4.2)
>                  MCA ess: tool (MCA v2.0, API v2.0, Component v1.4.2)
>              MCA grpcomm: bad (MCA v2.0, API v2.0, Component v1.4.2)
>              MCA grpcomm: basic (MCA v2.0, API v2.0, Component v1.4.2)
>
> On Wed, Dec 14, 2011 at 12:11 PM, Gustavo Correa <gus_at_[hidden]> wrote:
>
>     Hi Micah
>
>     Did you read Tim Prince's email to you?  Check it out.
>
>     Best thing is to set your environment variables [PATH,
>     LD_LIBRARY_PATH, intel setup]
>     in your initialization file, .profile/.bashrc or .[t]cshrc.
>
>     What is the output of 'ompi_info'? [From your ifort-built OpenMPI.]
>     Does it show ifort or gfortran?
>
>     I hope this helps,
>     Gus Correa
>
>     On Dec 14, 2011, at 11:21 AM, Micah Sklut wrote:
>
>     > Thanks for your thoughts,
>     >
>     > It would certainly appear that it is a PATH issue, but I still
>     haven't figured it out.
>     >
>     > When I type the ifort command, ifort does run.
>     > The intel path is in my PATH and is the first directory listed.
>     >
>     > Looking at the configure.log, there is nothing indicating use or
>     mentioning of "gfortran".
>     >
>     > gfortran is in the /usr/bin directory, which is in the PATH as well.
>     >
>     > Any other suggestions of things to look for?
>     >
>     > Thank you,
>     >
>     > On Wed, Dec 14, 2011 at 11:05 AM, Gustavo Correa <gus_at_[hidden]> wrote:
>     > Hi Micah
>     >
>     > Is ifort in your PATH?
>     > If not, the OpenMPI configure script will use any fortran
>     compiler it finds first, which may be gfortran.
>     > You need to run the Intel compiler startup script before you run
>     the OpenMPI configure.
>     > The easy thing to do is to source the Intel script inside your
>     .profile/.bashrc or .[t]cshrc file.
>     > I hope this helps,
>     >
>     > Gus Correa
>     >
>     > On Dec 14, 2011, at 9:49 AM, Micah Sklut wrote:
>     >
>     > > Hi All,
>     > >
>     > > I have installed openmpi for gfortran, but am now attempting
>     to install openmpi as ifort.
>     > >
>     > > I have run the following configuration:
>     > > ./configure --prefix=/opt/openmpi/intel CC=gcc CXX=g++
>     F77=ifort FC=ifort
>     > >
>     > > The install works successfully, but when I run
>     /opt/openmpi/intel/bin/mpif90, it runs as gfortran.
>     > > Oddly, when I am user: root, the same mpif90 runs as ifort.
>     > >
>     > > Can someone please alleviate my confusion as to why mpif90
>     is not running as ifort?
>     > >
>     > > Thank you for your suggestions,
>     > >
>     > > --
>     > > Micah
>
> --
> Micah Sklut