MONITORING BOMBUS, FLORAL ABUNDANCE AND DIVERSITY USING COMPUTER VISION AND AUTONOMOUS AERIAL VEHICLES
Sponsoring Institution
National Institute of Food and Agriculture
Project Status
NEW
Funding Source
Reporting Frequency
Annual
Accession No.
1032170
Grant No.
2024-67014-42301
Project No.
PENW-2023-08412
Proposal No.
2023-08412
Multistate No.
(N/A)
Program Code
A1113
Project Start Date
May 1, 2024
Project End Date
Apr 30, 2027
Grant Year
2024
Project Director
Margapuri, V.
Recipient Organization
VILLANOVA UNIV
800 LANCASTER AVENUE
VILLANOVA, PA 19085
Performing Department
(N/A)
Non Technical Summary
Pollinators play a crucial role in global biodiversity by providing ecosystem services vital to agricultural productivity and the preservation of wild flora. However, their continuing population declines across the globe threaten many pollinator services, with impacts on the maintenance of wild plant diversity, ecosystem stability, crop production, food security, and human welfare. Our project will directly address two of the United States Department of Agriculture's Pollinator Health priorities, "factors that influence the abundance, diversity and health of pollinators" and "development and evaluation of innovative tools and management practices that would likely be adopted by stakeholders to ensure healthy pollinators," by using unmanned aerial vehicles (UAVs) and artificial intelligence (AI) tools to provide a critical understanding of the relationship between pollinator abundance and floral presence. The proposed research aids in determining the abundance and diversity of both pollinator and flower species across large swathes of agricultural fields, prairies, and other landmasses by providing accurate AI image detectors and classifiers trained on expertly collected and curated datasets of both pollinator and flower species. We will also produce a free, public web application, with documented user guides for experts and non-experts alike, that leverages the AI detection and classification models to detect, classify, and quantify Bombus and flower species from uploaded images of target areas. By matching pollinators with their host flower species and evaluating best practices for using UAVs to acquire real-time monitoring information, researchers, land managers, and environmental watchdogs will have the tools needed to observe and make better-informed decisions regarding pollinator health and conservation.
Animal Health Component
0%
Research Effort Categories
Basic
20%
Applied
50%
Developmental
30%
Classification

Knowledge Area (KA): 136
Subject of Investigation (SOI): 3085
Field of Science (FOS): 1130
Percent: 100%
Goals / Objectives
The project aims to accomplish the following goals:
1. Construct a labeled and annotated image repository of 150-200 common prairie flowering plant species, with at least 1,000 images per species.
2. Develop flower detection and floral species classification algorithms using annotated images from the constructed image repository along with state-of-the-art vision algorithms.
3. Enhance existing Bombus image datasets and develop project-specific detection and classification algorithms.
4. Develop best practice guidelines for using UAVs to acquire imagery.
5. Develop a web application for researchers and enthusiasts to input UAV-derived imagery, use our algorithms, and characterize bumble bee and floral communities.
Project Methods
Flora dataset development: We will gather images of 150 to 200 flowering plant species common to the tallgrass prairies of the Midwestern US and Great Plains. These images, collected in the field during UAV flights, will be used to train our flower species classifier. We will supplement them with crowdsourced images from online repositories such as the Global Biodiversity Information Facility (GBIF). It is important that the images collected from UAV flights represent the context of the images end users will gather for analysis; supplementing with GBIF images, which comprise the same species but at different resolutions, lighting conditions, and angles, will improve the performance and generality of our classification algorithm so it can be used for multiple purposes. To label images of individual flowering units with the correct species name, we will completely sample selected 1 × 1 m plots within prairies, identifying each species in flower, and then acquire images from UAV flights over these plots. This ensures that all images are correctly labeled with the species name. From prior sampling, we know the spatial distributions and relative abundances of tallgrass prairie flowering plants at the Konza Prairie and other locations in northeastern (NE) Kansas.

Flower detection and classification model development: Localizing each flowering unit in an image and identifying it will occur in a two-step process. First, a flower detection algorithm will automatically crop out the flowers in an image. Each cropped flowering unit will then be passed through our classification algorithm to identify the species. We will develop convolutional neural network models using the Python programming language and explore both TensorFlow and PyTorch as potential backends. Flower detection and classification will involve training a YOLOv8 (or newer) model and an EfficientNetV2 model, respectively. Training the models will involve splitting the overall flora image repository into three parts: training, validation, and testing. A general rule of thumb is a 60/20/20 split: 60% of the dataset used to train the YOLOv8 model, 20% used for validation (testing model performance after each training epoch), and 20% used for testing (evaluating the model's predictive performance on images it has never seen before). Training will be performed over a specified number of training runs (epochs). We will assess model performance with standard metrics, including class loss, bounding-box loss, accuracy, precision, recall, F1-score, and mean average precision (mAP), for each data split; for training, we will also record run time. We will place the highest priority on test-set metrics, as they are the best indicator of overall model performance because the results are based on images not seen during training or validation. A minimal sketch of this split-and-train workflow appears below.
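The sketch below illustrates, in Python, the 60/20/20 split and the detector/classifier training just described. It assumes the Ultralytics YOLOv8 package and TensorFlow/Keras are installed; the dataset config name (flora.yaml), species count, image sizes, and epoch counts are illustrative placeholders rather than project-confirmed settings.

import random
from pathlib import Path

from ultralytics import YOLO          # assumed: Ultralytics YOLOv8 package
import tensorflow as tf               # assumed: TensorFlow/Keras for EfficientNetV2

def split_60_20_20(image_paths, seed=42):
    """Shuffle image paths and split them 60/20/20 into train/val/test."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)
    n_train = int(0.6 * len(paths))
    n_val = int(0.2 * len(paths))
    return (paths[:n_train],
            paths[n_train:n_train + n_val],
            paths[n_train + n_val:])

train, val, test = split_60_20_20(sorted(Path("flora_images").glob("*.jpg")))

# Detection: fine-tune a pretrained YOLOv8 model on the flora repository.
# "flora.yaml" is a hypothetical dataset config pointing at the three splits.
detector = YOLO("yolov8n.pt")
detector.train(data="flora.yaml", epochs=100, imgsz=640)
test_metrics = detector.val(split="test")   # precision, recall, mAP on held-out images
print(test_metrics.box.map)                 # mean average precision (mAP50-95)

# Classification: an EfficientNetV2 backbone over cropped flowering units.
NUM_SPECIES = 200                           # upper bound from the project goals
backbone = tf.keras.applications.EfficientNetV2S(
    include_top=False, weights="imagenet",
    input_shape=(384, 384, 3), pooling="avg")
classifier = tf.keras.Sequential(
    [backbone, tf.keras.layers.Dense(NUM_SPECIES, activation="softmax")])
classifier.compile(optimizer="adam",
                   loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])
# classifier.fit(train_ds, validation_data=val_ds, epochs=50)  # datasets built from the splits

The same YOLOv8 training pattern carries over to the bumble bee detector described next.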
Bumble bee detection and classification models: We will use images from the flora dataset to train a bumble bee detector on real-world (project-specific) data representing the kinds of images we will need to work with to locate and identify bees. Unmanned aerial vehicle (UAV) flights will be conducted to acquire additional images focusing on areas with bumble bees as necessary. We will train a YOLOv8 model on the bee dataset to act as the bee detector. For training, our process will be identical to the process outlined in the flower detection and classification model development section and sketched above.

Best practice guidelines for UAV use: We will evaluate different UAV flight and imaging parameters in combination with our computer vision models so that we can develop a set of guidelines for effective UAV-based monitoring of floral and bumble bee communities. We will use the DJI Mavic 3T, a user-friendly, enterprise-grade UAV designed for high-resolution mapping. The Mavic 3T provides three cameras mounted on a stabilized gimbal: a 48 MP camera with a wide-angle lens and 56× hybrid zoom, a 12 MP camera with a 162 mm telephoto lens, and a thermal camera. The sub-millimeter resolution of the on-board cameras will help capture the finer details of bumble bees and flora that AI-based models require for accurate detection and classification. Furthermore, the Mavic 3T provides advanced features such as mission control, terrain following, and obstacle avoidance, and does not require users to establish ground control points for accurate mapping. The mission control feature enables us to fly the UAV autonomously along a set of waypoints with specifications for speed, altitude, and image overlap to ensure safe and stable flights. Based on Mavic 3T specifications, we expect to be able to fully image a 50 × 50 m area and provide vision model inference much faster than traditional methods of sampling floral and bumble bee communities in the same area.

We will evaluate how to fly UAVs to optimize speed and image quality for flower and bee detection and classification. For example, we will test how the combination of UAV camera type, zoom level, and altitude affects ground sampling distance (GSD), i.e., the real-world dimensions of the pixels in an image (GSD = altitude × sensor width / (focal length × image width); a worked sketch appears at the end of this section). To reduce flight time we could reduce the overlap of our flight paths, but this could reduce the precision of our mapping. We will test how these parameters affect the ability to quickly image an area while still obtaining accurate computer vision model predictions.

Web application development: We will develop a web client that allows users and stakeholders to access the AI-based image classifiers for flora and pollinator species. It will be built with a Python 3 backend and a React front end to keep the development stack current. The web client will provide the following features to users (a minimal sketch of the planned inference endpoint follows the list):
1. Image preprocessing, including the construction of an orthomosaic and tiling of images.
2. Detection and classification of bumble bee and flower species within images.
3. Quantification of the number and spatial distribution of bumble bee and flower species within the target area.
4. Recommendations for UAV flights over field sites to capture images.
5. Access to annotated image datasets for bumble bee and flower species.
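As a concrete illustration of features 2 and 3 above, here is a minimal sketch of the kind of inference endpoint the web client could call. It assumes a Flask backend serving a trained YOLOv8 detector; the route name, model path, and response fields are hypothetical, and the production application may be structured differently.

from collections import Counter
import tempfile

from flask import Flask, jsonify, request    # assumed backend framework
from ultralytics import YOLO

app = Flask(__name__)
detector = YOLO("models/flora_bombus_yolov8.pt")   # hypothetical trained weights

@app.route("/detect", methods=["POST"])
def detect():
    """Run the detector on an uploaded image and return per-class counts."""
    upload = request.files["image"]
    with tempfile.NamedTemporaryFile(suffix=".jpg") as tmp:
        upload.save(tmp.name)
        result = detector(tmp.name)[0]
    labels = result.names                          # class id -> species label
    counts = Counter(labels[int(c)] for c in result.boxes.cls)
    return jsonify({"counts": counts, "num_detections": len(result.boxes)})

if __name__ == "__main__":
    app.run(debug=True)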
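Finally, here is a worked sketch of the ground sampling distance calculation referenced in the UAV guidelines above, using the standard photogrammetry relation GSD = (altitude × sensor width) / (focal length × image width). The sensor values below are illustrative placeholders, not confirmed Mavic 3T specifications.

def gsd_cm_per_px(altitude_m, sensor_width_mm, focal_length_mm, image_width_px):
    """Ground sampling distance (cm/pixel) for a nadir-pointing camera."""
    return (altitude_m * 100.0 * sensor_width_mm) / (focal_length_mm * image_width_px)

# Hypothetical wide-angle camera: 6.4 mm sensor width, 4.4 mm focal length,
# 8000 px image width (placeholders, not confirmed Mavic 3T values).
for altitude in (10, 20, 40):
    print(f"{altitude:>2} m altitude -> {gsd_cm_per_px(altitude, 6.4, 4.4, 8000):.2f} cm/px")

As expected from the formula, halving the flight altitude halves the GSD, at the cost of covering less ground per image.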