ARTPARK’s Robotics Challenge 2021-22 required a robot to demonstrate janitorial tasks typically performed in a washroom. The tasks were limited to clearing any rubbish on the floor, cleaning the washbasin and the washbasin counter with a sanitizing liquid, and finally mopping the restroom floor. The robot was given a brief opportunity to scout the restroom so that it could assess the environment and build a representation of it for planning and navigation.
- Worked as part of team Giga Robotics under the guidance of Juan Miguel Jimeno (creator of LinoRobot and Champ), Prateek Nagras (CEO, Acceleration Robotics), and Dr. K. V. Gangadharan (Professor, NIT Karnataka). A budget of $5,000 was provided to take the robot from simulation to hardware.
- Designed and built a janitorial robot with a mobile base and a robotic arm, with autonomous navigation capabilities built on ROS 1. The robot used a lidar and a depth camera for obstacle avoidance and navigation, and an object detection pipeline based on YOLOv4 was implemented to identify trash on the floor.
- Prior to the hardware round, the solution was submitted as a dockerized Gazebo simulation demonstrating the full task sequence.
- Link to competition video: click here
- Competition photos: click here
- Link to the robot's ROS packages: click here
- News articles: MTI News, Times of India, The Hindu
The robot has a footprint of 45 x 45 cm and four mecanum wheels, which enable holonomic motion. An RPLidar A1 laser scanner mounted in the base is used mainly for autonomous navigation. On top of the base sits a 6-DoF robotic arm used to pick and place trash, spray, and wipe. An Intel RealSense D435 camera is mounted on the arm's gripper; it is used for pose estimation of various items in the washroom and for 3D obstacle avoidance.
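For reference, holonomic motion on a mecanum base comes from the standard inverse kinematics that maps a body twist (vx, vy, wz) to four wheel speeds. A minimal sketch follows; the wheel radius and base dimensions are illustrative placeholders, not the robot's actual values.

```python
# Inverse kinematics for a 4-wheel mecanum base: body twist -> wheel speeds.
# Dimensions are illustrative placeholders, not the robot's actual values.
WHEEL_RADIUS = 0.04   # m
HALF_LENGTH = 0.225   # m, half of the 45 cm footprint
HALF_WIDTH = 0.225    # m

def mecanum_wheel_speeds(vx, vy, wz):
    """Return (front_left, front_right, rear_left, rear_right) speeds in rad/s."""
    k = HALF_LENGTH + HALF_WIDTH
    fl = (vx - vy - k * wz) / WHEEL_RADIUS
    fr = (vx + vy + k * wz) / WHEEL_RADIUS
    rl = (vx + vy - k * wz) / WHEEL_RADIUS
    rr = (vx - vy + k * wz) / WHEEL_RADIUS
    return fl, fr, rl, rr

# Pure sideways motion: only possible because of the mecanum rollers.
print(mecanum_wheel_speeds(0.0, 0.2, 0.0))
```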
The camera on the arm detects trash as soon as the robot enters the washroom, and the robot then moves to the detected trash items. Once it reaches a trash item, it detects it again to improve precision and accuracy, and uses the depth camera to estimate the item's 3D position (a sketch of this back-projection follows below).
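A common way to recover that 3D position, and the assumption behind this sketch (the repository's implementation may differ), is to back-project the bounding-box centre through the pinhole camera model using the aligned depth value and the camera intrinsics:

```python
import numpy as np

def pixel_to_point(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth in metres into the camera frame
    using the pinhole model; intrinsics come from the camera_info topic."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Example: centre of a detected trash bounding box at 0.9 m depth, with
# placeholder D435 intrinsics (read the real values from camera_info).
print(pixel_to_point(412, 305, 0.9, 615.0, 615.0, 320.0, 240.0))
```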
The robot is equipped with a mop below its base, which can be lifted and lowered, and is used to clean the markings on the floor.
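Such a lift mechanism is typically driven through a ros_control position controller. The sketch below shows how a mop joint could be commanded this way; the topic name and joint positions are hypothetical, not taken from the repository.

```python
#!/usr/bin/env python
# Sketch of lowering/raising a mop via a ros_control position controller.
# The topic name and positions are assumptions, not from the repository.
import rospy
from std_msgs.msg import Float64

rospy.init_node("mop_sketch")
mop = rospy.Publisher("/mop_joint_position_controller/command",
                      Float64, queue_size=1)
rospy.sleep(1.0)            # let the publisher connect
mop.publish(Float64(0.0))   # lowered: mop touches the floor
rospy.sleep(5.0)            # ... drive over the marking while mopping ...
mop.publish(Float64(0.05))  # raised: lift the mop clear of the floor
```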
The gripper of the robot is equipped with a nozzle, which is used to spray sanitization liquid on the counter.
The arm grips the sponge cleaner and wipes the sanitization liquid on the counter.
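A wiping pass like this is naturally expressed as a Cartesian path for the arm. Below is a minimal MoveIt (ROS 1) sketch of side-to-side strokes over the counter; the planning group name "arm" and the stroke offsets are assumptions, not values from the repository.

```python
#!/usr/bin/env python
# Sketch of a Cartesian wiping pass with MoveIt (ROS 1). The group name
# "arm" and the waypoint offsets are assumptions, not from the repository.
import copy
import sys
import rospy
import moveit_commander

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("wipe_sketch")
arm = moveit_commander.MoveGroupCommander("arm")

waypoints = []
pose = copy.deepcopy(arm.get_current_pose().pose)
for i in range(3):
    pose.position.y += 0.10 if i % 2 == 0 else -0.10  # alternate stroke direction
    waypoints.append(copy.deepcopy(pose))
    pose.position.x += 0.05                           # advance between strokes
    waypoints.append(copy.deepcopy(pose))

# Interpolate a straight-line path through the waypoints (1 cm resolution).
plan, fraction = arm.compute_cartesian_path(waypoints, 0.01, 0.0)
if fraction > 0.9:
    arm.execute(plan, wait=True)
```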
For the perception capabilities required by some of the above tasks, custom datasets were used to train models based on the YOLOv4 object detection architecture, a state-of-the-art object detector, together with an inference pipeline built with OpenCV. The training datasets were created from images we captured ourselves plus a few images from the simulation environment. The images were annotated and converted to the training format (YOLO Darknet in this case) using Roboflow, an online tool that provides an interface for uploading images and building datasets in a given training format. The two datasets used in our project are the trash-marking dataset and the dustbin dataset.
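As a rough illustration of the inference side, the sketch below runs a trained YOLOv4 Darknet model through OpenCV's DNN module; the cfg/weights filenames are placeholders for the trash-marking or dustbin models, not the repository's actual files.

```python
import cv2
import numpy as np

# Placeholder filenames for a trained YOLOv4 Darknet model.
net = cv2.dnn.readNetFromDarknet("yolov4-trash.cfg", "yolov4-trash.weights")
layer_names = net.getUnconnectedOutLayersNames()

def detect(frame, conf_threshold=0.5, nms_threshold=0.4):
    """Return [(class_id, confidence, (x, y, w, h)), ...] for one BGR frame."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    boxes, confidences, class_ids = [], [], []
    for output in net.forward(layer_names):
        for det in output:
            scores = det[5:]
            class_id = int(np.argmax(scores))
            conf = float(scores[class_id])
            if conf > conf_threshold:
                # YOLO outputs are normalised centre-x, centre-y, width, height.
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                boxes.append([int(cx - bw / 2), int(cy - bh / 2),
                              int(bw), int(bh)])
                confidences.append(conf)
                class_ids.append(class_id)
    if not boxes:
        return []
    keep = cv2.dnn.NMSBoxes(boxes, confidences, conf_threshold, nms_threshold)
    return [(class_ids[i], confidences[i], tuple(boxes[i]))
            for i in np.array(keep).flatten()]
```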
- Clone the repository and start the simulation container
git clone https://github.com/jaimandal10/artpark_robotics_challenge.git
cd artpark_robotics_challenge
. run.sh
- Open a new terminal and run
docker exec -it artpark_workspace_sim_container bash
- Source the workspace
source artpark_workspace/devel/setup.bash
- Make scripts executable
chmod +x artpark_workspace/src/GigaRoboticsArtpark/apbot_nav/scripts/*
chmod +x artpark_workspace/src/GigaRoboticsArtpark/apbot_description/scripts/*
- Spawn trash
rosrun apbot_description trash_spawner.py
- Spawn dustbins
rosrun apbot_description dustbin_spwaner.py
- Spawn markings
rosrun apbot_description marking_spawner.py
- Open a new terminal and run
docker exec -it artpark_workspace_sim_container bash
- Source the workspace
source artpark_workspace/devel/setup.bash
- Spawn robot
Please do not move the robot after launching, since map building starts as soon as the robot is launched and moving it would affect the map. Specify the desired launch coordinates in the terminal while launching; you can use the image as a reference to estimate the coordinates of your desired location.
roslaunch apbot_description robot.launch x:="2.5" y:="-0.8" yaw:="-3.14"
- Open a new terminal and run
docker exec -it artpark_workspace_sim_container bash
- Source the workspace
source artpark_workspace/devel/setup.bash
- Start run (all tasks)
rosrun apbot_nav main_sequence.py
- To perform only the marking-cleaning and trash pick-and-place sequence
rosrun apbot_nav trash_markings_sequence.py
- To perform only the spraying and wiping sequence
rosrun apbot_nav spray_sequence.py
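main_sequence.py chains navigation and manipulation tasks together. A hypothetical skeleton of such a sequencing node is sketched below, assuming standard move_base goals; the coordinates and task stubs are illustrative only, not taken from the repository.

```python
#!/usr/bin/env python
# Hypothetical skeleton of a task-sequencing node like main_sequence.py:
# drive to a goal with move_base, then run each cleaning task in order.
# Goal coordinates and task stubs are illustrative only.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def go_to(client, x, y, yaw_w=1.0):
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = "map"
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.w = yaw_w
    client.send_goal(goal)
    client.wait_for_result()
    return client.get_state()

if __name__ == "__main__":
    rospy.init_node("task_sequence_sketch")
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()
    go_to(client, 1.0, 0.5)  # e.g. drive to a detected trash item
    # pick_trash(); clean_markings(); spray_and_wipe()  # task stubs
```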