I am trying to call an mmdet.mask_rcnn model with the resnet50_fpn_1x backbone, passing a PIL Image object, but then in the call to convert_raw_prediction I get the following error:
```
ERROR: Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 937, in _bootstrap
    self._bootstrap_inner()
  File "/usr/local/lib/python3.9/threading.py", line 980, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "/usr/src/app/main.py", line 223, in detection
    res = measures.measure('medidas', image_file)
  File "/usr/src/app/core/utils/measure_utils.py", line 110, in measure
    return self.infer_image(model, model_type, class_map, img_obj, img_path, img_size)
  File "/usr/src/app/core/utils/measure_utils.py", line 36, in infer_image
    preds = model_type.predict_from_dl(model, infer_dl, keep_images=True, detection_threshold=0.33)
  File "/usr/local/lib/python3.9/site-packages/icevision/models/mmdet/common/mask/prediction.py", line 66, in predict_from_dl
    return _predict_from_dl(
  File "/usr/local/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/icevision/models/utils.py", line 107, in _predict_from_dl
    preds = predict_fn(
  File "/usr/local/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/icevision/models/mmdet/common/mask/prediction.py", line 31, in _predict_batch
    return convert_raw_predictions(
  File "/usr/local/lib/python3.9/site-packages/icevision/models/mmdet/common/mask/prediction.py", line 93, in convert_raw_predictions
    return [
  File "/usr/local/lib/python3.9/site-packages/icevision/models/mmdet/common/mask/prediction.py", line 94, in <listcomp>
    convert_raw_prediction(
  File "/usr/local/lib/python3.9/site-packages/icevision/models/mmdet/common/mask/prediction.py", line 120, in convert_raw_prediction
    keep_masks = MaskArray(np.vstack(raw_masks)[keep_mask])
  File "<__array_function__ internals>", line 200, in vstack
  File "/usr/local/lib/python3.9/site-packages/numpy/core/shape_base.py", line 296, in vstack
    return _nx.concatenate(arrs, 0, dtype=dtype, casting=casting)
  File "<__array_function__ internals>", line 200, in concatenate
ValueError: all the input arrays must have same number of dimensions, but the array at index 0 has 2 dimension(s) and the array at index 7 has 3 dimension(s)
```
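For what it's worth, the ValueError itself is plain NumPy behaviour: np.vstack refuses to stack arrays whose number of dimensions differs. Here is a minimal sketch that reproduces it with hypothetical mask shapes (the (619, 825) size just mirrors the image below, and the workaround at the end is my own assumption, not icevision's fix):

```python
import numpy as np

# Hypothetical per-class raw masks: the first 7 entries are single 2-D (H, W)
# masks, while the entry at index 7 is a 3-D (num_instances, H, W) stack, so
# the number of dimensions differs between entries.
raw_masks = [np.zeros((619, 825))] * 7 + [np.zeros((2, 619, 825))]

try:
    np.vstack(raw_masks)  # same call as convert_raw_prediction
except ValueError as e:
    print(e)
    # all the input arrays must have same number of dimensions, but the array
    # at index 0 has 2 dimension(s) and the array at index 7 has 3 dimension(s)

# Possible workaround (an assumption, not icevision's own fix): promote every
# 2-D mask to a one-instance 3-D stack before concatenating.
fixed = np.concatenate([m[None] if m.ndim == 2 else m for m in raw_masks], axis=0)
print(fixed.shape)  # (9, 619, 825)
```

So the mismatch presumably comes from the per-class segmentation output that mmdet hands to icevision not having a uniform shape, which would put the real fix upstream of the vstack call rather than in my own code.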
I am using the following Dockerfile to run the code:
```dockerfile
FROM python:3.9.16

# Install system dependencies
RUN apt-get update && apt-get upgrade && \
    apt remove --autoremove nvidia-cuda-toolkit && \
    apt remove --autoremove nvidia-* && \
    apt-get purge nvidia* && \
    apt-get autoremove && \
    apt-get autoclean && \
    rm -rf /usr/local/cuda* && \
    apt update && \
    apt-get install -y python-tk && \
    pip3 install --upgrade setuptools pip wheel
RUN pip install nvidia-pyindex
RUN pip install nvidia-cuda-runtime-cu11

# Set working directory and copy application files
WORKDIR /usr/src/app
COPY ./app /usr/src/app/

# Install Python dependencies
RUN pip3 install --no-cache-dir torch==1.10.0+cu111 torchvision==0.11.1+cu111 -f https://download.pytorch.org/whl/torch_stable.html
RUN pip3 install mmcv-full==1.3.17 -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.10.0/index.html
RUN pip3 install mmdet==2.17.0 icevision[all]
RUN pip3 install --no-cache-dir uvicorn fastapi pyproj python-multipart sahi==0.10.1 pillow==8.4.0

# Expose port and set command to run the application
EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```
and the image shape is:
Image /tmp/tmpv3b24q6n/image.jpg: (619, 825, 3)
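For completeness, this is roughly the path from the PIL image to predict_from_dl inside infer_image (a simplified sketch: the class map, image size and pretrained flag below are placeholders rather than the app's real values):

```python
from icevision.all import *
import numpy as np
from PIL import Image

# Placeholders, not the app's real configuration.
class_map = ClassMap(["medida"])
img_size = 640

model_type = models.mmdet.mask_rcnn
backbone = model_type.backbones.resnet50_fpn_1x(pretrained=True)
model = model_type.model(backbone=backbone, num_classes=len(class_map))
# (the real app loads its trained checkpoint into `model` before inference)

img = Image.open("/tmp/tmpv3b24q6n/image.jpg")  # PIL image, (619, 825, 3) as array

infer_tfms = tfms.A.Adapter([*tfms.A.resize_and_pad(img_size), tfms.A.Normalize()])
infer_ds = Dataset.from_images([np.array(img)], infer_tfms)
infer_dl = model_type.infer_dl(infer_ds, batch_size=1, shuffle=False)

# This is the call from the traceback that ends up in convert_raw_prediction.
preds = model_type.predict_from_dl(
    model, infer_dl, keep_images=True, detection_threshold=0.33
)
```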