ERROR BUILDING EXTENSION '_prroi_pooling' #404

Open
Wei-Peng-fei opened this issue Aug 28, 2023 · 4 comments

@Wei-Peng-fei

c++ -MMD -MF prroi_pooling_gpu.o.d -DTORCH_EXTENSION_NAME=_prroi_pooling -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /home/isrl3090m/anaconda3/envs/ubuntu-pytracking/lib/python3.7/site-packages/torch/include -isystem /home/isrl3090m/anaconda3/envs/ubuntu-pytracking/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -isystem /home/isrl3090m/anaconda3/envs/ubuntu-pytracking/lib/python3.7/site-packages/torch/include/TH -isystem /home/isrl3090m/anaconda3/envs/ubuntu-pytracking/lib/python3.7/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /home/isrl3090m/anaconda3/envs/ubuntu-pytracking/include/python3.7m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++14 -c /home/isrl3090m/E_Disk/wpf/pytracking/ltr/external/PreciseRoIPooling/pytorch/prroi_pool/src/prroi_pooling_gpu.c -o prroi_pooling_gpu.o 
In file included from /home/isrl3090m/E_Disk/wpf/pytracking/ltr/external/PreciseRoIPooling/pytorch/prroi_pool/src/prroi_pooling_gpu.c:17:
/home/isrl3090m/E_Disk/wpf/pytracking/ltr/external/PreciseRoIPooling/pytorch/prroi_pool/src/prroi_pooling_gpu_impl.cuh:1:1: error: expected unqualified-id before ‘.’ token
    1 | ../../../src/prroi_pooling_gpu_impl.cuh
      | ^
/home/isrl3090m/E_Disk/wpf/pytracking/ltr/external/PreciseRoIPooling/pytorch/prroi_pool/src/prroi_pooling_gpu.c: In function ‘at::Tensor prroi_pooling_backward_cuda(const at::Tensor&, const at::Tensor&, const at::Tensor&, const at::Tensor&, int, int, float)’:
/home/isrl3090m/E_Disk/wpf/pytracking/ltr/external/PreciseRoIPooling/pytorch/prroi_pool/src/prroi_pooling_gpu.c:64:5: error: ‘PrRoIPoolingBackwardGpu’ was not declared in this scope; did you mean ‘prroi_pooling_backward_cuda’?
   64 |     PrRoIPoolingBackwardGpu(
      |     ^~~~~~~~~~~~~~~~~~~~~~~
      |     prroi_pooling_backward_cuda
/home/isrl3090m/E_Disk/wpf/pytracking/ltr/external/PreciseRoIPooling/pytorch/prroi_pool/src/prroi_pooling_gpu.c: In function ‘at::Tensor prroi_pooling_coor_backward_cuda(const at::Tensor&, const at::Tensor&, const at::Tensor&, const at::Tensor&, int, int, float)’:
/home/isrl3090m/E_Disk/wpf/pytracking/ltr/external/PreciseRoIPooling/pytorch/prroi_pool/src/prroi_pooling_gpu.c:95:5: error: ‘PrRoIPoolingCoorBackwardGpu’ was not declared in this scope; did you mean ‘prroi_pooling_coor_backward_cuda’?
   95 |     PrRoIPoolingCoorBackwardGpu(
      |     ^~~~~~~~~~~~~~~~~~~~~~~~~~~
      |     prroi_pooling_coor_backward_cuda
/home/isrl3090m/E_Disk/wpf/pytracking/ltr/external/PreciseRoIPooling/pytorch/prroi_pool/src/prroi_pooling_gpu.c: In function ‘void pybind11_init__prroi_pooling(pybind11::module_&)’:
/home/isrl3090m/E_Disk/wpf/pytracking/ltr/external/PreciseRoIPooling/pytorch/prroi_pool/src/prroi_pooling_gpu.c:108:42: error: ‘prroi_pooling_forward_cuda’ was not declared in this scope; did you mean ‘prroi_pooling_backward_cuda’?
  108 |     m.def("prroi_pooling_forward_cuda", &prroi_pooling_forward_cuda, "PRRoIPooling_forward");
      |                                          ^~~~~~~~~~~~~~~~~~~~~~~~~~
      |                                          prroi_pooling_backward_cuda
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
  File "/home/isrl3090m/anaconda3/envs/ubuntu-pytracking/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1723, in _run_ninja_build
    env=env)
  File "/home/isrl3090m/anaconda3/envs/ubuntu-pytracking/lib/python3.7/subprocess.py", line 512, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "run_video.py", line 38, in <module>
    main()
  File "run_video.py", line 34, in main
    run_video(args.tracker_name, args.tracker_param,args.videofile, args.optional_box, args.debug, args.save_results)
  File "run_video.py", line 20, in run_video
    tracker.run_video_generic(videofilepath=videofile, optional_box=optional_box, debug=debug, save_results=save_results)
  File "../pytracking/evaluation/tracker.py", line 395, in run_video_generic
    out = tracker.track(frame, info)
  File "../pytracking/evaluation/multi_object_wrapper.py", line 165, in track
    out = self.trackers[obj_id].initialize(image, init_info_split[obj_id])
  File "../pytracking/tracker/dimp/dimp.py", line 84, in initialize
    self.init_classifier(init_backbone_feat)
  File "../pytracking/tracker/dimp/dimp.py", line 574, in init_classifier
    compute_losses=plot_loss)
  File "../ltr/models/target_classifier/linear_filter.py", line 94, in get_filter
    weights = self.filter_initializer(feat, bb)
  File "/home/isrl3090m/anaconda3/envs/ubuntu-pytracking/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "../ltr/models/target_classifier/initializer.py", line 164, in forward
    weights = self.filter_pool(feat, bb)
  File "/home/isrl3090m/anaconda3/envs/ubuntu-pytracking/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "../ltr/models/target_classifier/initializer.py", line 45, in forward
    return self.prroi_pool(feat, roi1)
  File "/home/isrl3090m/anaconda3/envs/ubuntu-pytracking/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "../ltr/external/PreciseRoIPooling/pytorch/prroi_pool/prroi_pool.py", line 28, in forward
    return prroi_pool2d(features, rois, self.pooled_height, self.pooled_width, self.spatial_scale)
  File "../ltr/external/PreciseRoIPooling/pytorch/prroi_pool/functional.py", line 44, in forward
    _prroi_pooling = _import_prroi_pooling()
  File "../ltr/external/PreciseRoIPooling/pytorch/prroi_pool/functional.py", line 33, in _import_prroi_pooling
    verbose=True
  File "/home/isrl3090m/anaconda3/envs/ubuntu-pytracking/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1136, in load
    keep_intermediates=keep_intermediates)
  File "/home/isrl3090m/anaconda3/envs/ubuntu-pytracking/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1347, in _jit_compile
    is_standalone=is_standalone)
  File "/home/isrl3090m/anaconda3/envs/ubuntu-pytracking/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1452, in _write_ninja_file_and_build_library
    error_prefix=f"Error building extension '{name}'")
  File "/home/isrl3090m/anaconda3/envs/ubuntu-pytracking/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1733, in _run_ninja_build
    raise RuntimeError(message) from e
RuntimeError: Error building extension '_prroi_pooling'

GPU: NVIDIA GeForce RTX 3090
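
The first compiler error above also hints at another possible cause: line 1 of prroi_pooling_gpu_impl.cuh is reported to contain the relative path ../../../src/prroi_pooling_gpu_impl.cuh rather than code, which is what the PreciseRoIPooling symlinks look like when they have been checked out as plain text files. A minimal check, with paths assumed from the build log (not something confirmed by the participants in this thread):

import os

# Location of the wrapper sources inside the pytracking checkout (taken from the log above).
src_dir = "ltr/external/PreciseRoIPooling/pytorch/prroi_pool/src"

for name in ("prroi_pooling_gpu_impl.cu", "prroi_pooling_gpu_impl.cuh"):
    path = os.path.join(src_dir, name)
    if os.path.islink(path):
        # Expected layout: a symlink pointing into PreciseRoIPooling/src.
        print(name, "-> symlink to", os.readlink(path))
    else:
        with open(path) as f:
            first_line = f.readline().strip()
        # If the first line is a relative path, the symlink was materialised as a
        # plain text file; copying the real sources from PreciseRoIPooling/src over
        # these files (or re-cloning with symlinks intact) should let the compile proceed.
        print(name, "is a regular file; first line:", repr(first_line))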
@Destroy95

Same error. Have you solved this problem?

@moooises

You might need to add this line to your code:
os.environ['CUDA_HOME']="/usr/local/cuda-11.8"

Change the string to your CUDA path and it might work.

@AbdallahOmarAhmed

You might need to add this line to your code: os.environ['CUDA_HOME']="/usr/local/cuda-11.8"

Change the string to your CUDA path and it might work.

Where should I write this line of code, please?

@moooises

At the beginning of the code. You need to import the os module first:

import os
os.environ['CUDA_HOME']="/usr/local/cuda-11.8"

I wrote it in the Tracker.py file, but it should work in any other file, as long as that file actually gets run. You can also do

export CUDA_HOME=/usr/local/cuda-11.8

but then you will need to do it every time you open a new terminal to run the script.
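
For anyone unsure where this goes in the pytracking layout, a minimal sketch, assuming run_video.py (the entry script shown in the traceback above) is the file being executed:

import os

# Must run before any pytracking import that triggers the JIT build of
# _prroi_pooling, so that torch.utils.cpp_extension resolves nvcc from this toolkit.
os.environ['CUDA_HOME'] = "/usr/local/cuda-11.8"  # adjust to your CUDA install path

As a general note (not confirmed in this thread): if an earlier build attempt already failed, deleting the cached build directory for _prroi_pooling (by default under ~/.cache/torch_extensions, or wherever TORCH_EXTENSIONS_DIR points) forces a clean rebuild on the next run.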
