146 changes: 146 additions & 0 deletions GEMstack/knowledge/detection/bevfusion_instructions.md
# BEVFusion Set Up Instructions
These instructions were tested on AWS g4dn.xlarge instances (NVIDIA T4 GPU) with Arara Ubuntu 20.04 DCV images.

## Set Up Instructions for CUDA 11.3
### Set Up your NVIDIA Driver
```
sudo apt-get update
sudo apt-get install -y ubuntu-drivers-common
ubuntu-drivers devices
sudo apt-get install -y nvidia-driver-565
sudo reboot
```

### Check that your NVIDIA driver was set up correctly:
```
nvidia-smi
```

### Install CUDA 11.3
```
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin
sudo mv cuda-ubuntu2004.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget https://developer.download.nvidia.com/compute/cuda/11.3.0/local_installers/cuda-repo-ubuntu2004-11-3-local_11.3.0-465.19.01-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu2004-11-3-local_11.3.0-465.19.01-1_amd64.deb # Note: this line often copy-pastes incorrectly; type it manually if dpkg complains
sudo apt-key add /var/cuda-repo-ubuntu2004-11-3-local/7fa2af80.pub
sudo apt-get update
sudo apt-get -y install cuda-11-3
```

### Manually modify your bashrc file to include CUDA 11.3
```
nano ~/.bashrc
```

Add the following two lines to the bottom of the file:
```
export PATH=/usr/local/cuda-11.3/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-11.3/lib64:$LD_LIBRARY_PATH
```

Ensure you source your bashrc file:
```
source ~/.bashrc
nvidia-smi
```
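
Note that `nvidia-smi` only confirms the driver; the CUDA version it reports is the driver's maximum supported version, not the installed toolkit. To check the toolkit on your PATH, query `nvcc` (the grep below assumes the standard `nvcc` banner format):

```
# Should print a line containing "release 11.3" if the PATH edits took effect.
nvcc --version | grep "release 11.3"
```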

### Set Up Miniconda
```
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh
source ~/.bashrc
```

### Set Up mmdetection3d:
```
conda create --name openmmlab python=3.8 -y
conda activate openmmlab
conda install pytorch=1.10.2 torchvision=0.11.3 cudatoolkit=11.3 -c pytorch
pip install -U openmim
mim install mmengine
mim install 'mmcv>=2.0.0rc4, <2.2.0'
mim install 'mmdet>=3.0.0,<3.3.0'
pip install cumm-cu113
pip install spconv-cu113
git clone https://github.com/open-mmlab/mmdetection3d.git -b dev-1.x
cd mmdetection3d
pip install -v -e .
python projects/BEVFusion/setup.py develop
```
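
Before running the full demo, it can help to verify inside the openmmlab env that the core packages import and that PyTorch sees CUDA and the GPU. These one-liners are a suggested sanity check, not part of the upstream instructions:

```
# All three commands should succeed without tracebacks.
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
python -c "import mmcv, mmdet; print(mmcv.__version__, mmdet.__version__)"
python -c "import mmdet3d; print(mmdet3d.__version__)"
```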

### Run this afterwards to verify BEVFusion has been set up correctly:
```
python projects/BEVFusion/demo/multi_modality_demo.py demo/data/nuscenes/n015-2018-07-24-11-22-45+0800__LIDAR_TOP__1532402927647951.pcd.bin demo/data/nuscenes/ demo/data/nuscenes/n015-2018-07-24-11-22-45+0800.pkl projects/BEVFusion/configs/bevfusion_lidar-cam_voxel0075_second_secfpn_8xb4-cyclic-20e_nus-3d.py ~/Downloads/bevfusion_lidar-cam_voxel0075_second_secfpn_8xb4-cyclic-20e_nus-3d-5239b1af.pth --cam-type all --score-thr 0.2 --show
```

## Set Up Instructions for CUDA 11.1
### Set Up your NVIDIA Driver
```
sudo apt-get update
sudo apt-get install -y ubuntu-drivers-common
ubuntu-drivers devices
sudo apt-get install -y nvidia-driver-565
sudo reboot
```

### Check that your NVIDIA driver was set up correctly:
```
nvidia-smi
```

### Install CUDA 11.1
```
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin
sudo mv cuda-ubuntu2004.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget https://developer.download.nvidia.com/compute/cuda/11.1.0/local_installers/cuda-repo-ubuntu2004-11-1-local_11.1.0-455.23.05-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu2004-11-1-local_11.1.0-455.23.05-1_amd64.deb
sudo apt-key add /var/cuda-repo-ubuntu2004-11-1-local/7fa2af80.pub
sudo apt-get update
sudo apt-get -y install cuda-11-1
```

### Manually modify your bashrc file to include CUDA 11.1
```
nano ~/.bashrc
```

Add the following two lines to the bottom of the file:
```
export PATH=/usr/local/cuda-11.1/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-11.1/lib64:$LD_LIBRARY_PATH
```

Ensure you source your bashrc file:
```
source ~/.bashrc
nvidia-smi
```

### Set Up Miniconda
```
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh
source ~/.bashrc
```

### Set Up mmdetection3d:
```
conda create --name openmmlab python=3.8 -y
conda activate openmmlab
pip install torch==1.10.0+cu111 torchvision==0.11.1+cu111 -f https://download.pytorch.org/whl/torch_stable.html
pip install -U openmim
mim install mmengine
mim install 'mmcv>=2.0.0rc4, <2.2.0'
mim install 'mmdet>=3.0.0,<3.3.0'
pip install cumm-cu111
pip install spconv-cu111
git clone https://github.com/open-mmlab/mmdetection3d.git -b dev-1.x
cd mmdetection3d
pip install -v -e .
python projects/BEVFusion/setup.py develop
```
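
Because this variant installs PyTorch from the cu111 wheel index, it is worth confirming that pip actually resolved a CUDA 11.1 build before moving on (a CPU-only or mismatched wheel is an easy silent failure). A suggested check, run inside the openmmlab env:

```
# torch.version.cuda should report 11.1 for the +cu111 wheels.
python -c "import torch; assert torch.version.cuda == '11.1', torch.version.cuda; print('cu111 build OK')"
```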

### Run this afterwards to verify BEVFusion has been set up correctly:
```
python projects/BEVFusion/demo/multi_modality_demo.py demo/data/nuscenes/n015-2018-07-24-11-22-45+0800__LIDAR_TOP__1532402927647951.pcd.bin demo/data/nuscenes/ demo/data/nuscenes/n015-2018-07-24-11-22-45+0800.pkl projects/BEVFusion/configs/bevfusion_lidar-cam_voxel0075_second_secfpn_8xb4-cyclic-20e_nus-3d.py ~/Downloads/bevfusion_lidar-cam_voxel0075_second_secfpn_8xb4-cyclic-20e_nus-3d-5239b1af.pth --cam-type all --score-thr 0.2 --show
```
109 changes: 109 additions & 0 deletions GEMstack/onboard/perception/README.md
# Perception Team
This folder contains code that detects objects in 3D space and notifies other GEMstack components of the detections. It is split into three main areas: pedestrian detection code created in the first half of the course, cone detection code that detects traffic cones (optionally with orientation), and sensor fusion code that fuses YOLO + painted lidar detections with PointPillars 3D bounding boxes to detect pedestrians.

## Cone Detection
A YOLO model was trained to detect the orientations of traffic cones in 3D space to support the Vertical Groups.

### Relevant Files
- cone_detection.py
- perception_utils.py


## Sensor Fusion
To improve the quality of pedestrian detections, we fuse detections from multiple modalities to take advantage of each sensor's strengths (camera and lidar in our case). We do this by fusing the 3D bounding boxes produced by YOLO (a camera-based pedestrian detector) combined with painted lidar data with those produced by PointPillars (a lidar-only detector).

### Relevant Files
#### Setup files to create the PointPillars CUDA 11.1.1 Docker container:
- build_point_pillars.sh
- setup/docker-compose.yaml
- setup/Dockerfile.cuda111

#### Code used to detect pedestrians:
- combined_detection.py
- point_pillars_node.py
- yolo_node.py

#### Code used to analyze the results of detections and extract data from rosbags for further analysis:
- eval_3d_bbox_performance.py
- rosbag_processor.py
- test_eval_3d_bbox_performance.py

### Local Installation Steps for PointPillars Docker Container
#### READ BEFOREHAND:
- Before performing the installation steps, please make sure you source ALL terminal windows (except the Docker terminal window):
```
source /opt/ros/noetic/setup.bash
source ~/catkin_ws/devel/setup.bash
```
- These instructions assume you are running them from the top-level GEMstack folder.
- If you run into setup issues, see the "Known Fixes for Set Up Issues" section at the bottom.

#### Steps:
1. Install Docker
2. Install Docker Compose
3. Run the provided bash script, which handles Docker permission issues and simplifies setup:
```
cd GEMstack/onboard/perception
bash build_point_pillars.sh
```
4. Start the container (use sudo if you run into permissions issues)
```
docker compose -f setup/docker-compose.yaml up
```
5. Run roscore on your local machine (make sure you source first):
```
roscore
```
6. Run the launch yaml to start the CombinedDetector3D GEMstack component (make sure you source first):
```
python3 main.py --variant=detector_only launch/combined_detection.yaml
```
7. Run a rosbag on a loop (make sure you source first):
```
rosbag play -l yourRosbagNameGoesHere.bag
```
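
The local steps above can be sketched as a single convenience script. This is a hypothetical wrapper, not a file that ships with the repo; the rosbag name is a placeholder and paths assume a top-level GEMstack checkout:

```
#!/bin/bash
# Hypothetical wrapper for steps 4-7 above; adjust paths and the bag name.
source /opt/ros/noetic/setup.bash
source ~/catkin_ws/devel/setup.bash

# Step 4: start the PointPillars container in the background.
(cd GEMstack/onboard/perception && docker compose -f setup/docker-compose.yaml up -d)

# Step 5: start the ROS master and give it a moment to come up.
roscore &
sleep 3

# Step 6: launch the CombinedDetector3D component.
python3 main.py --variant=detector_only launch/combined_detection.yaml &

# Step 7: loop a rosbag as the data source.
rosbag play -l yourRosbagNameGoesHere.bag
```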

### Vehicle Installation Steps for PointPillars Docker Container
Perform the same setup steps as in the section above, with the following exceptions:
1. Ensure you source instead with the following command:
```
source ~/demo_ws/devel/setup.bash
```
2. Initialize the sensors:
```
roslaunch basic_launch sensor_init.launch
```
3. Initialize GNSS (if you need it)
```
roslaunch basic_launch visualization.launch
```
4. Do not run a rosbag (Step 7 above); it is not needed, since you will be getting live data from the vehicle.

#### Known Fixes for Set Up Issues
1. If you get a shape error when creating the `results_normal` variable in yolo_node.py, downgrade your Ultralytics version to 8.1.5 (the version used on the car at the time of writing):
```
pip install 'ultralytics==8.1.5'
```
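
After downgrading, you can confirm the pin took effect before relaunching the detector:

```
# Should print 8.1.5; anything else means the downgrade did not take effect.
python -c "import ultralytics; print(ultralytics.__version__)"
```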
2. If you run into ROS communication issues, make sure you have sourced EVERY terminal window (except the Docker window, which does not need it):
```
source /opt/ros/noetic/setup.bash
source ~/catkin_ws/devel/setup.bash
```

### Visualization Steps:
Please make sure you source each new terminal window after opening it (the local source commands are below):
```
source /opt/ros/noetic/setup.bash
source ~/catkin_ws/devel/setup.bash
```

1. Start rviz:
```
rviz
```
2. Publish a static transform from the map frame to the vehicle frame so the published bounding box data can be visualized:
```
rosrun tf2_ros static_transform_publisher 0 0 0 0 0 0 map currentVehicleFrame
```
3. In RViz, click "Add" in the bottom left corner. In "By display type", under "jsk_rviz_plugins", select BoundingBoxArray.
4. Expand BoundingBoxArray on the left. Next to "Topic" there is a blank space; click it (it is a hidden drop-down box) and select the BoundingBoxArray topic to visualize.
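
If nothing shows up in RViz, confirm that a BoundingBoxArray topic is actually being published. The exact topic name depends on the node configuration, so the grep below is deliberately name-agnostic:

```
# List candidate topics, then print one message from the first match.
rostopic list | grep -i bounding
rostopic echo -n 1 "$(rostopic list | grep -i bounding | head -n 1)"
```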
49 changes: 49 additions & 0 deletions GEMstack/onboard/perception/build_point_pillars.sh
#!/bin/bash

# Check if the point_pillars_node.py, the helper functions file, and model weights exist
if [ ! -f "point_pillars_node.py" ]; then
    echo "ERROR: point_pillars_node.py not found in the current directory!"
    echo "Please place your point_pillars_node.py file in the same directory as this script."
    exit 1
fi

if [ ! -f "combined_detection_utils.py" ]; then
    echo "ERROR: combined_detection_utils.py not found in the current directory!"
    echo "Please place your combined_detection_utils.py file in the same directory as this script."
    exit 1
fi

if [ ! -f "epoch_160.pth" ]; then
    echo "WARNING: epoch_160.pth model weights not found in the current directory!"
    echo "Please place your model weights file in the same directory as this script."
    echo "Continue anyway? (y/n)"
    read -p ">" choice
    if [ "$choice" != "y" ] && [ "$choice" != "Y" ]; then
        exit 1
    fi
fi

echo "Building Point Pillars Docker container..."
export DOCKERFILE=setup/Dockerfile.cuda111

# Capture the invoking user's UID/GID (exported so docker-compose can substitute them)
export MY_UID=$(id -u)
export MY_GID=$(id -g)

# Try docker compose directly; uncomment the sudo fallback below if needed
if ! docker compose -f setup/docker-compose.yaml build; then
    echo "Build failed. Uncomment the sudo lines in this script to retry with elevated permissions."
    # echo "Using sudo to build the container..."
    # sudo -E docker compose -f setup/docker-compose.yaml build
fi

# Notify user of how to run the container
echo "Build complete. To start the container, run:"
echo "docker compose -f setup/docker-compose.yaml up"
echo ""
echo "Or with sudo if you have permission issues:"
echo "sudo docker compose -f setup/docker-compose.yaml up"
echo ""
echo "To run in detached mode (background):"
echo "docker compose -f setup/docker-compose.yaml up -d"
echo "Or: sudo docker compose -f setup/docker-compose.yaml up -d"