Est res #13

Merged
merged 15 commits into from
Apr 26, 2022
52 changes: 52 additions & 0 deletions .github/workflows/cicd.yaml
@@ -0,0 +1,52 @@
name: CICD

# on:
#   pull_request:
#     branches: [main]

on: push

jobs:
  lidar_on_docker:
    runs-on: self-hosted

    steps:

      - name: Checkout branch
        uses: actions/checkout@v2

      - name: Build docker image
        run: docker build -t lidar_deep_im .

      - name: Check code neatness (linter)
        run: docker run lidar_deep_im flake8

      # - name: Unit testing
      #   run: docker run lidar_deep_im pytest --ignore=actions-runner --ignore="notebooks"

      - name: Full module run on LAS subset
        run: docker run -v /var/data/CICD_github_assets:/CICD_github_assets lidar_deep_im
        # sudo mount -v -t cifs -o user=mdaab,domain=IGN,uid=24213,gid=10550 //store.ign.fr/store-lidarhd/projet-LHD/IA/Validation_Module/CICD_github_assets/B2V0.5 /var/data/CICD_github_assets

      # - name: Evaluate decisions using optimization code on a single, corrected LAS
      #   run: >
      #     docker run -v /var/data/cicd/CICD_github_assets:/CICD_github_assets lidar_deep_im
      #     python lidar_prod/run.py print_config=true +task='optimize'
      #     +building_validation.optimization.debug=true
      #     building_validation.optimization.todo='prepare+evaluate+update'
      #     building_validation.optimization.paths.input_las_dir=/CICD_github_assets/M8.0/20220204_building_val_V0.0_model/20211001_buiding_val_val/
      #     building_validation.optimization.paths.results_output_dir=/CICD_github_assets/opti/
      #     building_validation.optimization.paths.building_validation_thresholds_pickle=/CICD_github_assets/M8.3B2V0.0/optimized_thresholds.pickle

      - name: Check the user
        run: whoami

      - name: Save the docker image because everything worked
        run: docker save lidar_deep_im > /var/data/CICD_github_assets/lidar_deep_im.tar  # requires write permission

      - name: Clean the server for further uses
        if: always()  # always run, even if a previous step failed
        run: docker system prune  # remove obsolete docker images (they take a huge amount of space)
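Since the workflow triggers on every push and runs on a single self-hosted runner, overlapping runs can queue up. A hedged sketch of how that could be limited with the standard GitHub Actions `concurrency` key (the group name `cicd-${{ github.ref }}` is illustrative, not part of this PR):

```yaml
on: push

# Cancel a still-running workflow when a newer push arrives on the same branch.
concurrency:
  group: cicd-${{ github.ref }}
  cancel-in-progress: true
```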



52 changes: 52 additions & 0 deletions .github/workflows/gh-pages
@@ -0,0 +1,52 @@
# Workflow name
name: "Documentation Build"

# Event that must trigger the workflow
on:
  push: # <- trigger on push
    branches:
      - main # <- but only on the main branch
      - FixDocAPI # <- also on this branch until documentation is up and running.

jobs:

  build-and-deploy:
    runs-on: ubuntu-latest

    # Tasks to run when the workflow is launched
    steps:

      # 1. First get the repository source

      - name: "Checkout"
        uses: actions/checkout@v2

      # 2. Sphinx part: install the tool and its dependencies

      - name: "Set up Python"
        uses: actions/setup-python@v1
        with:
          python-version: 3.9.12

      # Packages that depend on torch need to be installed afterwards,
      # hence the "requirements_torch_deps.txt" file.
      - name: "Install Python dependencies"
        working-directory: ./docs/
        run: |
          python3 -m pip install --upgrade pip
          pip3 install -r requirements.txt
          pip3 install -r requirements_torch_deps.txt

      - name: "Build Sphinx Doc"
        working-directory: ./docs/
        run: |
          make html

      # 3. Deployment to GitHub Pages

      - name: "Deploy Github Pages"
        uses: JamesIves/[email protected]
        with:
          BRANCH: gh-pages # <- branch where generated doc files will be committed
          FOLDER: docs/build/html/ # <- dir where .nojekyll is created and from which GitHub Pages deploys.
64 changes: 64 additions & 0 deletions dockerfile
@@ -0,0 +1,64 @@
FROM nvidia/cuda:10.1-devel-ubuntu18.04
# An nvidia image seems to be necessary for torch-points-kernels; a "devel" image also seems to be required by the same library.

# set the IGN proxy, otherwise apt-get and other applications don't work
ENV http_proxy 'http://192.168.4.9:3128/'
ENV https_proxy 'http://192.168.4.9:3128/'

# set the timezone, otherwise the build asks for it interactively... and freezes
ENV TZ=Europe/Paris
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone

# all the apt-get installs
RUN apt-get update && apt-get upgrade -y && apt-get install -y \
    software-properties-common \
    wget \
    git \
    libgl1-mesa-glx libegl1-mesa libxrandr2 libxss1 libxcursor1 libxcomposite1 libasound2 libxi6 libxtst6  # packages needed for Anaconda

RUN wget --quiet https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh \
    && /bin/bash ~/miniconda.sh -b -p /opt/conda \
    && rm ~/miniconda.sh

ENV PATH /opt/conda/bin:$PATH

WORKDIR /lidar

# copy all the data now (because the requirements files are needed for anaconda)
COPY . .

# install the python packages via anaconda
RUN conda env create -f bash/setup_environment/requirements.yml

# Make RUN commands use the new environment:
SHELL ["conda", "run", "-n", "lidar_multiclass", "/bin/bash", "-c"]

# install all the dependencies
RUN conda install -y pytorch=="1.10.1" torchvision=="0.11.2" -c pytorch -c conda-forge \
    && conda install pytorch-lightning==1.5.9 -c conda-forge \
    && pip install torch-scatter -f https://data.pyg.org/whl/torch-1.10.1+cpu.html torch-sparse -f https://data.pyg.org/whl/torch-1.10.1+cpu.html torch-geometric \
    && pip install torch-points-kernels --no-cache \
    && pip install torch torchvision \
    && conda install -y pyg==2.0.3 -c pytorch -c pyg -c conda-forge

# the entrypoint guarantees that all commands will be run in the conda environment
ENTRYPOINT ["conda", \
    "run", \
    "-n", \
    "lidar_multiclass"]

CMD ["python", \
    "-m", \
    "lidar_multiclass.predict", \
    "--config-path", \
    "/CICD_github_assets/parametres_etape1/.hydra", \
    "--config-name", \
    "predict_config_V1.6.3.yaml", \
    "predict.src_las=/CICD_github_assets/parametres_etape1/test/792000_6272000_subset_buildings.las", \
    "predict.output_dir=/CICD_github_assets/output_etape1", \
    "predict.resume_from_checkpoint=/CICD_github_assets/parametres_etape1/checkpoints/epoch_033.ckpt", \
    "predict.gpus=0", \
    "datamodule.batch_size=10", \
    "datamodule.subtile_overlap=0", \
    "hydra.run.dir=/lidar"]
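The trailing `key=value` arguments in the `CMD` are Hydra-style config overrides: each dotted key addresses a node in a nested config. As a rough illustration of that convention only (this is not Hydra's implementation, and `apply_overrides` is a hypothetical helper), merging such strings into a nested dict can be sketched as:

```python
def apply_overrides(pairs):
    """Merge 'a.b=c'-style override strings into a nested dict."""
    config = {}
    for pair in pairs:
        dotted_key, value = pair.split("=", 1)   # split on the first '=' only
        *parents, leaf = dotted_key.split(".")   # walk the dotted path
        node = config
        for key in parents:
            node = node.setdefault(key, {})      # create intermediate dicts
        node[leaf] = value                       # values stay as strings here
    return config

overrides = ["predict.gpus=0", "datamodule.batch_size=10"]
print(apply_overrides(overrides))
# -> {'predict': {'gpus': '0'}, 'datamodule': {'batch_size': '10'}}
```

Hydra additionally type-converts values and validates keys against the loaded config; this sketch only shows the nesting behaviour.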

2 changes: 1 addition & 1 deletion lidar_multiclass/predict.py
@@ -62,7 +62,7 @@ def predict(config: DictConfig) -> str:

@hydra.main(config_path="../configs/", config_name="config.yaml")
def main(config: DictConfig):
f"""See function {predict.__name__}.
"""See function predict

:meta private:

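The docstring change above is more than cosmetic: Python only assigns `__doc__` from a plain string literal, so an f-string in docstring position is silently ignored. A minimal illustration (function names are made up for the example):

```python
def with_fstring():
    f"""See function {with_fstring.__name__}."""  # f-strings are never docstrings

def with_plain_string():
    """See function predict."""  # a plain literal becomes __doc__

print(with_fstring.__doc__)       # -> None
print(with_plain_string.__doc__)  # -> See function predict.
```

This is why Sphinx (and `help()`) saw no documentation for `main` before the fix.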
60 changes: 60 additions & 0 deletions setup.cfg
@@ -0,0 +1,60 @@
[metadata]
project_name = "Segmentation Validation Model"
author = "Charles GAYDON"
contact = "[email protected]"
license_file = LICENSE
description_file = README.md


[isort]
line_length = 99
profile = black
filter_files = True


[flake8]
max_line_length = 99
show_source = True
format = pylint
ignore =
    F401  # Module imported but unused
    W504  # Line break occurred after a binary operator
    F841  # Local variable name is assigned to but never used
    F403  # from module import *
    E501  # Line too long
    E741  # temp ignore
    F405  # temp ignore
    W503  # temp ignore
    F811  # temp ignore
    E266  # temp ignore
    E262  # temp ignore
    W605  # temp ignore
    E722  # temp ignore
    F541  # temp ignore
    W291  # temp ignore
    E401  # temp ignore
    E402  # temp ignore
    W293  # temp ignore

exclude =
    .git
    __pycache__
    data/*
    tests/*
    notebooks/*
    logs/*
    /home/MDaab/.local/lib/python3.9/
    /home/MDaab/anaconda3/

[tool:pytest]
python_files = tests/*
log_cli = True
markers =
    slow
addopts =
    --durations=0
    --strict-markers
    --doctest-modules
filterwarnings =
    ignore::DeprecationWarning
    ignore::UserWarning
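The `filterwarnings` entries make pytest suppress `DeprecationWarning` and `UserWarning` during test runs. The same mechanism can be reproduced with the standard `warnings` module; a minimal sketch (the `noisy` function is invented for the example):

```python
import warnings

def noisy():
    warnings.warn("old API", DeprecationWarning)
    return 42

# Mirror pytest's `filterwarnings = ignore::DeprecationWarning`:
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("ignore", DeprecationWarning)
    result = noisy()

print(result, len(caught))  # -> 42 0  (the warning was swallowed)
```

Here the warning never reaches the recorded list, just as it never reaches pytest's output with the config above.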