
Compilation issues under Centos 7 #4

Closed
bforsbe opened this issue Jun 20, 2016 · 6 comments

Comments
@bforsbe
Contributor

bforsbe commented Jun 20, 2016

Originally reported by: AndrewPurkiss (Bitbucket: AndrewPurkiss, GitHub: Unknown)


I'm attempting to install and run this under CentOS 7. So far, I have come across a couple of issues.

  1. I've installed cmake 2.8.12 from source, as CentOS 7 only ships 2.8.11. I also tried the following with cmake3.
  2. The MPI compilers are not found by cmake, as shown below:

$ cmake ../
-- BUILD TYPE set to the default type: 'Release'
-- Setting fallback CUDA_ARCH=35
-- Setting cpu precision to double
-- Setting gpu precision to single
-- Using cuda wrapper to compile....
-- Cuda version is >= 7.5 and single-precision build, enable double usage warning.
CMake Error at /usr/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:108 (message):
Could NOT find MPI_C (missing: MPI_C_LIBRARIES MPI_C_INCLUDE_PATH)
Call Stack (most recent call first):
/usr/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:315 (_FPHSA_FAILURE_MESSAGE)
/usr/share/cmake/Modules/FindMPI.cmake:587 (find_package_handle_standard_args)
CMakeLists.txt:168 (find_package)

-- Configuring incomplete, errors occurred!
See also "/home/purkis01/2016/Software/Relion2/relion2-beta/build/CMakeFiles/CMakeOutput.log".

I have tried setting the environment variables
MPI_C_COMPILER /usr/lib64/openmpi/bin/mpicc
MPI_CXX_COMPILER /usr/lib64/openmpi/bin/mpicxx

but the CMakeLists.txt file seems to ignore these and I get
//Path to a program.
MPI_C_COMPILER:FILEPATH=MPI_C_COMPILER-NOTFOUND

in the CMakeCache.txt file

When I enter the full mpicc and mpicxx paths into the CMakeCache.txt file, cmake ../ then completes fine.
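
For reference, the same result can probably be had without hand-editing CMakeCache.txt, by passing the wrapper paths to cmake as cache variables. A minimal sketch, assuming the OpenMPI wrappers live at the paths quoted above and a fresh build directory next to the sources:

# Sketch (untested on this setup): point FindMPI at the wrappers explicitly.
cmake -DMPI_C_COMPILER=/usr/lib64/openmpi/bin/mpicc \
      -DMPI_CXX_COMPILER=/usr/lib64/openmpi/bin/mpicxx \
      ../

Passing them with -D writes them into the CMake cache before FindMPI runs, whereas plain shell environment variables were apparently ignored here, as noted above.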

  3. Running make -j 8 then fails with a linking error on librelion_gpu_util.so:

$ make -j 8
[ 1%] [ 2%] [ 3%] [ 5%] [ 6%] [ 6%] [ 7%] Scanning dependencies of target copy_scripts
Building NVCC (Device) object src/apps/CMakeFiles/relion_gpu_util.dir/__/gpu_utils/cuda_kernels/./relion_gpu_util_generated_helper.cu.o
Building NVCC (Device) object src/apps/CMakeFiles/relion_gpu_util.dir/__/gpu_utils/./relion_gpu_util_generated_cuda_autopicker.cu.o
Building NVCC (Device) object src/apps/CMakeFiles/relion_gpu_util.dir/__/gpu_utils/./relion_gpu_util_generated_cuda_benchmark_utils.cu.o
Building NVCC (Device) object src/apps/CMakeFiles/relion_gpu_util.dir/__/gpu_utils/./relion_gpu_util_generated_cuda_backprojector.cu.o
Building NVCC (Device) object src/apps/CMakeFiles/relion_gpu_util.dir/__/gpu_utils/./relion_gpu_util_generated_cuda_helper_functions.cu.o
Building NVCC (Device) object src/apps/CMakeFiles/relion_gpu_util.dir/__/gpu_utils/./relion_gpu_util_generated_cuda_projector.cu.o
Building NVCC (Device) object src/apps/CMakeFiles/relion_gpu_util.dir/__/gpu_utils/./relion_gpu_util_generated_cuda_ml_optimiser.cu.o
[ 7%] Built target copy_scripts
[ 8%] Building NVCC (Device) object src/apps/CMakeFiles/relion_gpu_util.dir/__/gpu_utils/./relion_gpu_util_generated_cuda_projector_plan.cu.o
Scanning dependencies of target relion_gpu_util
Linking CXX shared library ../../lib/librelion_gpu_util.so
Error running link command: No such file or directory
make[2]: *** [lib/librelion_gpu_util.so] Error 2
make[1]: *** [src/apps/CMakeFiles/relion_gpu_util.dir/all] Error 2
make: *** [all] Error 2
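
The "Error running link command: No such file or directory" message suggests that the compiler recorded at configure time cannot be found in the build shell; that is only an assumption here, but it matches the resolution later in this thread. A quick sanity check, as a sketch:

# Check that the MPI wrappers resolve and match what cmake cached.
which mpicc mpicxx
grep -E 'MPI_(C|CXX)_COMPILER' CMakeCache.txt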


@bforsbe
Contributor Author

bforsbe commented Jun 20, 2016

Original comment by AndrewPurkiss (Bitbucket: AndrewPurkiss, GitHub: Unknown):


I also tried the fix mentioned in issue #3.

@bforsbe
Contributor Author

bforsbe commented Jun 21, 2016

Original comment by Bjoern Forsberg (Bitbucket: bforsbe, GitHub: bforsbe):


I will try to reproduce this today. There is probably a discrepancy in the link-time handling of libraries similar to that in issue #3, but one which requires a different fix.

@bforsbe
Contributor Author

bforsbe commented Jun 21, 2016

Original comment by Stefan Fleischmann (Bitbucket: sfle, GitHub: Unknown):


CentOS is a bit special... Check if you have these packages installed: openmpi openmpi-devel gcc-c++

OpenMPI is then installed but not enabled by default. Enable it before running cmake by running "module load mpi". You might have to log out and back in for the module command to become available. If you use a graphical terminal, check in its settings that it runs your shell as a login shell.

You can see what the module does by running "module show mpi".
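
Putting that together, the whole sequence on CentOS 7 would look roughly like the sketch below (package names and module name as given above; the exact module name can differ between installations):

sudo yum install openmpi openmpi-devel gcc-c++
module load mpi            # puts mpicc/mpicxx on PATH for this shell
module show mpi            # inspect what the module changes
which mpicc mpicxx         # should now resolve to /usr/lib64/openmpi/bin/...
cd build && cmake ../      # re-run the configuration with MPI available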

@bforsbe
Contributor Author

bforsbe commented Jun 21, 2016

Original comment by AndrewPurkiss (Bitbucket: AndrewPurkiss, GitHub: Unknown):


Hi Stefan,

Thanks for this. I had all the dependencies installed, but hadn't dug around to find the 'module load mpi' command.

Managed to compile this time, so I can let the users try it out.

Andrew

@bforsbe
Contributor Author

bforsbe commented Jun 21, 2016

Original comment by Stefan Fleischmann (Bitbucket: sfle, GitHub: Unknown):


Hi Andrew,

glad I could help. The reason for this way of packaging is that it simplifies the installation of multiple MPI implementations. For example, if you have both OpenMPI and MPICH installed, you can choose which one to use by running either

$ module load mpi/mpich-x86_64

or

$ module load mpi/openmpi-x86_64

Stefan

@bforsbe
Contributor Author

bforsbe commented Jun 21, 2016

Original comment by AndrewPurkiss (Bitbucket: AndrewPurkiss, GitHub: Unknown):


I do actually have both MPI flavours installed, so thank you for the information on how to point to the relevant one.

I was actually expecting CUDA versions to be more of an issue, as I have had to install different versions of other software depending on the hardware capabilities in the past.

bforsbe closed this as completed Jan 26, 2017
mtuijtel mentioned this issue Dec 17, 2019