Our Design
HARVEST integrates a variety of technologies to give farmers the data they need to reduce water waste in a user-friendly way. Our web-based design focuses on simplicity, efficiency, and accessibility so that all farmers can contribute to a more sustainable future.
Overview
The first step in developing the HARVEST algorithm was creating a conceptual outline of its functions. The algorithm is modular, integrating key components for data collection, processing, and actionable insights. Specifically, the outline defines the inputs, functionality, and outputs for data acquisition, data analysis, predictive irrigation modeling, irrigation recommendations, and a feedback loop that improves performance over time.
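The modular flow above can be sketched as a chain of small functions. This is a minimal illustration only; the function names, the `FieldReading` fields, and the toy moisture threshold are assumptions for demonstration, not the production HARVEST implementation.

```python
from dataclasses import dataclass

# Hypothetical sketch of the modular pipeline: data acquisition ->
# predictive irrigation modeling -> irrigation recommendation -> feedback.
# All names and numbers here are illustrative assumptions.

@dataclass
class FieldReading:
    soil_moisture: float      # volumetric water content, 0.0-1.0
    forecast_rain_mm: float   # rain expected in the next 24 h

def acquire_data() -> FieldReading:
    """Data acquisition: stand-in for sensor, drone, and weather inputs."""
    return FieldReading(soil_moisture=0.18, forecast_rain_mm=2.0)

def predict_deficit(reading: FieldReading, target_moisture: float = 0.30) -> float:
    """Predictive irrigation modeling: estimate the moisture shortfall."""
    return max(0.0, target_moisture - reading.soil_moisture)

def recommend_irrigation(deficit: float, forecast_rain_mm: float) -> str:
    """Irrigation recommendation: convert the deficit into farmer advice."""
    if deficit == 0.0 or forecast_rain_mm > 5.0:
        return "no irrigation needed"
    return f"irrigate to cover a {deficit:.2f} moisture deficit"

def record_feedback(outcome: str) -> None:
    """Feedback loop: placeholder for refining the model from field outcomes."""
    pass

reading = acquire_data()
advice = recommend_irrigation(predict_deficit(reading), reading.forecast_rain_mm)
print(advice)  # irrigate to cover a 0.12 moisture deficit
```

Each stage only sees the previous stage's output, which is what makes the design modular: any component (for example, the predictive model) can be swapped out without touching the others.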
Creating the Object Detection Model
We used Roboflow to streamline the creation of a high-quality model. To begin, we collected and uploaded a dataset of 74 drone images of grape fields. We then annotated these images by drawing bounding boxes around the objects of interest and labeling each as high, low, or no grape maturity, along with two additional classes for brush and weeds. After choosing the YOLOv8 object detection architecture, we exported the dataset and trained the model with Roboflow Train. Initial results showed a precision of 56%, leaving room for improvement. In a second iteration, we retrained the model focusing solely on grapes with no maturity, which raised identification precision above 70% and suggested that further gains are possible by tuning the model's detection sensitivity. We plan to continue fine-tuning the model to maximize accuracy.
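For a YOLOv8 export, the annotation scheme above corresponds to a dataset config along these lines. The paths and exact class-name strings are illustrative assumptions; only the five label categories come from our annotation setup.

```yaml
# Illustrative YOLOv8 data.yaml (paths and name strings are assumptions)
path: datasets/grape-fields
train: images/train
val: images/val

nc: 5          # number of classes
names:
  - high-maturity
  - low-maturity
  - no-maturity
  - brush
  - weed
```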
Deploying the Model
The overall intent is to leverage tools and frameworks optimized for mobile deployment. After training the object detection model, we deployed the program on a personal mobile device and tested it against grape field images provided by multiple farmers; none of these images were part of the training set. The model consistently and correctly identified DMZs within the displayed grape field images. Moving forward, we will fine-tune the model to improve mAP, IoU, and recall, and then integrate the program across a wider variety of platforms using Flutter.
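Of the metrics named above, IoU (Intersection over Union) is the building block the others depend on: a detection counts as correct only if its predicted box overlaps the ground-truth box enough. A minimal standalone sketch for axis-aligned boxes, not HARVEST's actual evaluation code:

```python
# IoU of two axis-aligned bounding boxes given as (x1, y1, x2, y2).
# Standalone illustration of the metric, not the project's eval pipeline.

def iou(box_a, box_b):
    """Return intersection area divided by union area of two boxes."""
    # Corners of the overlapping rectangle (empty if the boxes are disjoint).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two partially overlapping 10x10 boxes: intersection 25, union 175.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 0.14285714285714285
```

Recall and mAP are then computed on top of this: a prediction is a true positive when its IoU with a ground-truth box exceeds a chosen threshold (0.5 is a common default), so raising IoU directly improves the other two metrics.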