Thermal Flight Test 1
July Update
After a several-month hiatus spent moving and wrapping up other projects, I finally made some small progress on the wildlife management platform. The ground station bottom panel has been manufactured and mounted to the case. I repurposed an old Beelink mini PC, installed Ubuntu, and set up the necessary WFB-NG connection configuration along with the additional pipelines needed for the video downlink. It's been a while since a flight test, so I put some time on the drone and tested the video downlink stability and the vibration issues associated with the mounts for the downward-facing and thermal cameras. For the time being, the FLIR is on a simple 3D-printed mount but will eventually be mounted via gimbal.
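For anyone curious what the ground-station receive side looks like, here is a rough sketch: WFB-NG's `wfb_rx` delivers the RTP/H.264 stream to a local UDP port, and a GStreamer pipeline decodes and displays it. The port number and the exact element chain are illustrative assumptions, not my exact config.

```python
# Sketch of the ground-station side of the video downlink.
# wfb_rx (WFB-NG) forwards the stream to a local UDP port; GStreamer
# depayloads and decodes it. Port and caps are placeholder assumptions.
import shlex
import subprocess

VIDEO_PORT = 5600  # hypothetical UDP port wfb_rx forwards the stream to

def build_receive_pipeline(port: int = VIDEO_PORT) -> str:
    """Build a gst-launch-1.0 command that decodes the RTP/H.264 downlink."""
    return (
        "gst-launch-1.0 "
        f'udpsrc port={port} caps="application/x-rtp,media=video,'
        'encoding-name=H264,payload=96" '
        "! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink sync=false"
    )

def launch(port: int = VIDEO_PORT) -> subprocess.Popen:
    # shlex.split keeps the quoted caps string as a single argument
    return subprocess.Popen(shlex.split(build_receive_pipeline(port)))
```

On the drone side the mirror image of this pipeline (camera source, H.264 encode, RTP payload, `wfb_tx`) runs on the flight computer.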
Thermal Pipeline
Ultimately, all video processing needs to be done on-device so that a feedback loop in the control architecture can inform decisions made by the model. For the first test, I added object detection via basic contours that look at threshold differences to box potential objects of interest. Testing this method indoors actually worked surprisingly well, but I found that on a hot summer day this approach is overly simplistic. It only accounts for 'hot' targets, so anything cooler than the ground or other background will not be detected even when there is a clear threshold change.
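To make the limitation concrete, here is a minimal sketch of that first-pass detector: threshold the thermal frame against an estimated background level and box the connected hot regions. My actual version used OpenCV contours; this numpy-only version shows the same idea, and the threshold value is illustrative.

```python
# Sketch of the first-pass detector: threshold against the background
# level, then box connected hot blobs. Note the flaw discussed above:
# the one-sided threshold only keeps *hot* targets, so anything cooler
# than the background is invisible to it.
import numpy as np

def detect_hot_boxes(frame: np.ndarray, delta: float = 20.0):
    """Return bounding boxes (x0, y0, x1, y1) of regions hotter than
    the background estimate by more than `delta` counts."""
    background = np.median(frame)      # crude background estimate
    mask = frame > background + delta  # one-sided: cold targets are missed
    boxes = []
    visited = np.zeros_like(mask, dtype=bool)
    # simple 4-connected flood fill to group hot pixels into blobs
    for y, x in zip(*np.nonzero(mask)):
        if visited[y, x]:
            continue
        stack, blob = [(y, x)], []
        visited[y, x] = True
        while stack:
            cy, cx = stack.pop()
            blob.append((cy, cx))
            for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not visited[ny, nx]):
                    visited[ny, nx] = True
                    stack.append((ny, nx))
        ys, xs = zip(*blob)
        boxes.append((int(min(xs)), int(min(ys)), int(max(xs)), int(max(ys))))
    return boxes
```

Swapping the mask for `np.abs(frame - background) > delta` would catch cold targets too, but at that point a learned detector starts to look like the better investment.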
YOLO
After verifying basic functionality, I moved toward more standard models with a good track record. Moving forward, there are two options:
- Use an edge TPU to offload processing.
- Mount a Jetson Nano and run the models there.
Option 1 is optimal and theoretically simpler, but it constrains both the models I can use and the scalability of future development. If I end up needing the Jetson, it will probably be worth replacing the current flight computer (a Raspberry Pi 4) with the Jetson entirely, or interfacing between the two in some way. Unfortunately, the current flight computer already uses its Ethernet port to exchange telemetry with the flight controller, so alternative methods for reliable cross-computer communication would have to be explored.
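One option I would explore for the RPi4-to-Jetson link is a small UDP channel carrying JSON detection messages over a direct connection. Everything here (port, message schema) is a hypothetical sketch, not a settled design.

```python
# Hypothetical sketch of a lightweight RPi4 <-> Jetson detection link:
# JSON messages over UDP. Port and schema are placeholders.
import json
import socket

DETECTION_PORT = 14650  # placeholder port, chosen to avoid common defaults

def send_detection(sock: socket.socket, addr, box, confidence: float) -> None:
    # serialize one detection (bounding box + confidence) and send it
    msg = json.dumps({"box": box, "conf": confidence}).encode()
    sock.sendto(msg, addr)

def recv_detection(sock: socket.socket) -> dict:
    # block until one detection message arrives and decode it
    data, _ = sock.recvfrom(1024)
    return json.loads(data)
```

UDP keeps latency low and tolerates drops (a stale detection is worthless anyway), but anything mission-critical would still ride the existing telemetry link.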
I am in the process of testing YOLOv5 nano compiled to .tflite (to run on the TPU) and have so far gotten significantly better results than with the initial approach (surprise surprise).
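A sketch of the preprocessing the .tflite model expects before inference: letterbox the thermal frame to the model's square input, replicate the single thermal channel to three, and normalize to [0, 1]. The 320 px input size and the gray pad value of 114 follow YOLOv5 conventions; the nearest-neighbor resize here stands in for `cv2.resize` to keep the sketch dependency-free.

```python
# Letterbox preprocessing for a YOLOv5-style .tflite model.
# Input size (320) and pad value (114) follow YOLOv5 conventions;
# nearest-neighbor resize stands in for cv2.resize.
import numpy as np

def letterbox(img: np.ndarray, size: int = 320, pad: int = 114) -> np.ndarray:
    """Return a (size, size, 3) float32 array in [0, 1], aspect preserved."""
    if img.ndim == 2:
        # thermal frames are single-channel; replicate to the 3 channels
        # the model was trained on
        img = np.stack([img] * 3, axis=-1)
    h, w = img.shape[:2]
    scale = size / max(h, w)
    nh, nw = int(round(h * scale)), int(round(w * scale))
    # nearest-neighbor resize via integer index maps
    ys = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = img[ys][:, xs]
    # paste the resized frame onto a gray canvas, centered
    canvas = np.full((size, size, 3), pad, dtype=np.uint8)
    top, left = (size - nh) // 2, (size - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    return canvas.astype(np.float32) / 255.0
```

The resulting array (with a batch dimension added) is what gets handed to the TFLite interpreter's input tensor.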
The next major steps will be to test the stable range for video transmission and the effective altitude needed to accurately detect, and eventually classify, objects on the ground. Then comes data collection, classification model training, and a decision algorithm based on the classification results.
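The altitude question comes down to ground sample distance (GSD): how many meters of ground each thermal pixel covers, which bounds how small an animal can be and still span enough pixels to detect. The sensor numbers below are placeholders for illustration, not my camera's actual specs.

```python
# Back-of-envelope GSD helper via similar triangles:
# gsd = altitude * pixel_pitch / focal_length.
# Both sensor parameters below are placeholder values, not real specs.

def ground_sample_distance(altitude_m: float,
                           pixel_pitch_m: float = 12e-6,   # placeholder
                           focal_length_m: float = 9e-3) -> float:  # placeholder
    """Meters of ground covered per pixel at a given altitude."""
    return altitude_m * pixel_pitch_m / focal_length_m

def pixels_on_target(target_size_m: float, altitude_m: float, **kw) -> float:
    """Roughly how many pixels a target of a given size spans."""
    return target_size_m / ground_sample_distance(altitude_m, **kw)
```

With these placeholder numbers, a 0.4 m animal spans about 10 pixels at 30 m, so the flight tests will effectively be calibrating this curve against real detection performance.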