Once Co-Pilot is started, the Pi camera continuously records video at 1120x624 and 20 fps, while still images are captured from the video port at 5 Hz during recording. Each image is then resized to 600x300 and used as the input for detecting traffic lights and classifying their states.
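Below is a minimal sketch of that capture loop, assuming the picamera library; the resolution, frame rate and resize target come from the description above, while the file names and loop structure are only illustrative.

# capture loop sketch (picamera assumed; file names are illustrative)
import time
import picamera

with picamera.PiCamera(resolution=(1120, 624), framerate=20) as camera:
    camera.start_recording("recording.h264")  # continuous video recording
    try:
        while True:
            # grab a still from the video port without interrupting the recording,
            # resized to the 600x300 detection input
            camera.capture("frame.jpg", use_video_port=True, resize=(600, 300))
            time.sleep(0.2)  # ~5 Hz capture rate
    finally:
        camera.stop_recording()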
To achieve this, two neural networks are used: a detection net that extracts the locations of traffic lights in the image, and a classification net that classifies their states. For detection I use an off-the-shelf pre-trained SSD model; since it is already compiled as an Edge TPU tflite model, it runs directly on the Coral accelerator. The input size of the SSD model is only 300x300, so the detection net has to be applied twice to cover the whole image. With the help of the Coral accelerator, the two inferences take ~150 ms in total. Running the same inference purely on the CPU would take about 2.3 s, which is not acceptable for a real-time application.
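The two-window detection pass can be pictured roughly as follows. This is only a sketch using the generic pycoral detection adapters; since the repo's model is exported without NMS, the real post-processing is likely custom, and the helper function and offset handling here are assumptions.

# two-window detection sketch (pycoral adapters assumed; the no-NMS model in this
# repo likely needs its own post-processing instead of detect.get_objects)
from pycoral.adapters import common, detect
from pycoral.utils.edgetpu import make_interpreter

interpreter = make_interpreter("models/ssd_mobilenet_v2_coco_quant_no_nms_edgetpu.tflite")
interpreter.allocate_tensors()

def detect_traffic_lights(frame_600x300, score_threshold=0.3):
    """Run the 300x300 SSD twice, once per half of the 600x300 frame (a PIL image)."""
    objects = []
    for x_offset in (0, 300):
        window = frame_600x300.crop((x_offset, 0, x_offset + 300, 300))
        common.set_input(interpreter, window)
        interpreter.invoke()
        for obj in detect.get_objects(interpreter, score_threshold):
            objects.append((obj, x_offset))  # keep the offset to map boxes back to the full frame
    return objects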
The detected traffic lights are cropped using their bounding boxes, resized to 32x16, and then fed into a custom-trained classification net. Its output consists of the probabilities of 11 categories: [green, red, yellow, red_yellow, green_left, red_left, green_right, red_right, pedestrian, side, none].
The classification net is a lightweight CNN with an architecture similar to LeNet; it also runs on the Coral TPU.
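A rough sketch of this classification step, again with the pycoral adapters; the label list mirrors the categories above, while the crop helper and the (width, height) order of the 32x16 resize are assumptions.

# classification sketch (pycoral classify adapter assumed)
from pycoral.adapters import classify, common
from pycoral.utils.edgetpu import make_interpreter

clf = make_interpreter("models/traffic_light_edgetpu.tflite")
clf.allocate_tensors()

LABELS = ["green", "red", "yellow", "red_yellow", "green_left", "red_left",
          "green_right", "red_right", "pedestrian", "side", "none"]

def classify_state(frame, bbox):
    """Crop a detected light, resize it to 32x16 and return the most likely state."""
    crop = frame.crop((bbox.xmin, bbox.ymin, bbox.xmax, bbox.ymax))
    crop = crop.resize((16, 32))  # (width, height); the exact orientation is an assumption
    common.set_input(clf, crop)
    clf.invoke()
    best = classify.get_classes(clf, top_k=1)[0]
    return LABELS[best.id], best.score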
Finally, a tracker built from a Kalman filter and the Hungarian algorithm keeps track of only the traffic light relevant to the driver. A pre-recorded voice alert is played according to the state of that traffic light.
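The association step of such a tracker can be sketched with scipy's Hungarian solver; the centre-distance cost and the gating threshold below are assumptions, and the Kalman predict/update details are omitted.

# detection-to-track association sketch (Hungarian algorithm via scipy)
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(track_centers, detection_centers, max_distance=50.0):
    """Match predicted track positions to new detections; pairs further apart
    than max_distance pixels are rejected."""
    if len(track_centers) == 0 or len(detection_centers) == 0:
        return []
    cost = np.linalg.norm(
        np.asarray(track_centers)[:, None, :] - np.asarray(detection_centers)[None, :, :],
        axis=2,
    )
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_distance]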
# on RPi
echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update
sudo apt-get install -y libedgetpu1-std
sudo apt-get install -y python3-pycoral
sudo apt-get install -y python3-tflite-runtime
python3 -m pip install -r requirements_pi.txt
sudo apt-get install libsdl2-mixer-2.0-0 libsdl2-2.0-0
git clone https://github.com/xeonqq/co-pilot.git
cd co-pilot
python3 -m src.main --ssd_model models/ssd_mobilenet_v2_coco_quant_no_nms_edgetpu.tflite --label models/coco_labels.txt --score_threshold 0.3 --traffic_light_classification_model models/traffic_light_edgetpu.tflite --traffic_light_label models/traffic_light_labels.txt --blackbox_path=./
I use supervisor to start co-pilot at RPi boot-up.
Once you’ve SSH’d into your Pi, run “alsamixer”. This brings up a terminal interface for adjusting the Raspberry Pi’s volume: press the up and down arrow keys to increase or decrease it, then press ESC when you are done.
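For the voice alerts themselves, the libsdl2-mixer dependency installed above suggests pygame is used for playback; the snippet below is only a sketch under that assumption, and the wav file name is hypothetical.

# voice alert playback sketch (pygame assumed because of the libsdl2-mixer dependency;
# "red_light.wav" is a hypothetical file name)
import pygame

pygame.mixer.init()
alert = pygame.mixer.Sound("red_light.wav")
alert.play()  # non-blocking playback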
# under repo root folder
python3 -m pytest
# or
python3 -m tests.test_detection
python3 -m tests.test_classification
Build and run the Docker container
./build.sh
./linux_run.sh
In the Docker container
cd workspace
python3 -m src.reprocess --ssd_model models/ssd_mobilenet_v2_coco_quant_no_nms_edgetpu.tflite --label models/coco_labels.txt --score_threshold 0.3 --traffic_light_classification_model models/traffic_light_edgetpu.tflite --traffic_light_label models/traffic_light_labels.txt --blackbox_path=./ --video recording_20210417-090028.h264.mp4 --fps 5
Both main and reprocess can be run without a Coral TPU by specifying the --cpu option.