Recipient Organization
AUBURN UNIVERSITY
108 M. WHITE SMITH HALL
AUBURN, AL 36849
Performing Department
Biosystems Engineering
Non Technical Summary
Plant phenotyping is the quantitative assessment of complex plant traits such as health, productivity, architecture, growth and yield. It has a broad range of applications in agriculture, including plant breeding, crop scouting and crop inventory. Traditional plant phenotyping mainly relies on human effort, which is labor-intensive, time-consuming, error-prone and ergonomically poor for field workers. Labor shortages and a changing climate continue to challenge the efficiency, profitability, sustainability and resiliency of crop production systems in the US. High-throughput plant phenotyping (HTPP) technologies are needed to improve current plant breeding and crop management practices. In the state of Alabama and across the Southeastern U.S., peanuts and pine trees are crops of high economic importance and sustainability. The ultimate goals of this project are to 1) automate field-based phenotyping of peanut plants for accelerated breeding, and 2) automate seedling inventory in bareroot pine seedling production. Two HTPP systems powered by artificial intelligence (AI) will be developed and evaluated for the two goals, respectively. For peanut phenotyping, high-resolution multispectral, thermal and depth images of peanut plants will be collected by a crop-row-following, ultra-low-altitude drone. Flight campaigns will be conducted in conjunction with peanut breeding trials and drought experiments. For pine seedling inventory, an unmanned ground vehicle equipped with stereo cameras and a GPS unit will be developed to autonomously travel on a seedling bed and examine individual seedlings as they are mechanically exposed. The mobile inventory system will be tested at multiple commercial forest seedling nurseries within the State of Alabama. AI-based data processing pipelines will be developed to characterize plant architecture, health status and yield components at the plant and organ levels. Research findings will be disseminated through professional conferences and producer meetings. The proposed AI-powered HTPP technologies are expected to significantly improve the efficiency and efficacy of traditional crop monitoring practices and alleviate farm labor shortages, leading to increased productivity, profitability and sustainability for the Alabama agriculture and forestry sectors in the long term.
Animal Health Component
10%
Research Effort Categories
Basic
0%
Applied
10%
Developmental
90%
Goals / Objectives
The overarching goal of this project is to develop an AI-powered, vision-based robotic HTPP framework that is easily adaptable to various crop species (e.g., peanut, pine, soybean and cotton) and to different purposes (e.g., plant breeding, crop scouting and inventory). Given the economic importance of peanut and pine production in Alabama, the specific objectives of this project are to 1) develop and evaluate an AI-powered vision-based aerial robotic HTPP system to support precision peanut breeding, and 2) develop and evaluate an AI-powered vision-based ground robotic system for automated bareroot pine seedling inventory. The specific aims within Objective 1 are to 1) develop a terrain-following UAV-based multi-modal imaging platform; 2) acquire millimeter-resolution multi-modal imagery of peanut plants and ground truth measurements at different growth stages; and 3) develop and evaluate an AI-driven image processing pipeline to characterize peanut plant architecture, plant stress, and pod count. The specific aims within Objective 2 are to 1) develop an AI-ready ground-based imaging platform for bareroot pine seedling production fields; 2) develop a deep learning-based video processing pipeline that can perform detection, tracking, root collar diameter estimation, shoot height estimation and health status classification for individual seedlings; and 3) evaluate the system performance against year, location, pine species, seedlot, growth stage, and travel direction.
Project Methods
Objective 1: Develop and evaluate an AI-powered vision-based aerial robotic HTPP system for precision peanut breeding.

A terrain-following UAV-based multi-modal imaging platform will be developed by integrating a stereo 3D camera, a multispectral camera, a thermal camera, an RTK-GPS module, and an edge computer with a hexacopter. A motion planning algorithm based on the 3D sensing and GPS data will be developed to maintain a 2 m flight altitude above each crop row, yielding georeferenced, millimeter-resolution RGB, depth, multispectral, and thermal images. Drs. Chen and Sanz-Saez will carry out large-scale peanut breeding trials and small-scale peanut drought experiments, respectively, at the AAES E.V. Smith Research Center. The UAV campaigns will be performed bi-weekly during the growing season and after peanut digging. Manual ground truth data will be collected, including final yield and disease ratings for the breeding trials, and LAI, leaf-level hyperspectral data and stomatal imprints for the drought experiments. After digging, 1 m long sections of peanut plants from the drought experiments will be processed to obtain pod count and harvest index.

An AI-driven image processing pipeline will be developed to characterize peanut plant architecture, plant stress, and pod count after digging. The multi-task learning capability and transfer learning of deep convolutional neural networks (CNNs) will be exploited to realize plant organ (leaf, and pod after digging) instance segmentation and tracking in the multi-modal images. Regarding biotic stress, we will focus on peanut leaf diseases. A Mask R-CNN structure will be employed, and two additional parallel branches will be added to classify leaf disease symptoms and severity levels, respectively. A pretrained Mask R-CNN model will be modified and fine-tuned on our high-resolution RGB image dataset. As for abiotic stress, we will focus on plant drought response (i.e., stomatal characteristics). Thermal imaging has been used to indirectly quantify stomatal closure and stomatal conductance for different crops in the field. We will train a CNN to perform regression on the thermal images to predict the manually measured size, density and openness of stomata at the plot level. In addition to plant stress characterization, peanut pod detection after digging will enable not only pod counting but also characterization of pod distribution (i.e., clustered or spread out). The number of visible pods after digging will be tested as a proxy for yield estimation. The pipeline will be implemented using the open-source PyTorch deep learning library.

In addition to these novel traits, common agronomic traits will also be extracted, including plant height, LAI and NDVI (or its variants). The multispectral images will be used to reconstruct the hyperspectral signals collected on the ground. Correlation analysis will be performed to compare the image-derived traits with the ground truth measurements. Additionally, different machine learning models (e.g., random forest, neural network, support vector machine) with the image-derived traits as inputs will be tested to predict complex traits such as yield and harvest index. Machine learning and statistical analyses will be conducted in the statistical program R.
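As a rough illustration of the Mask R-CNN fine-tuning step, the sketch below swaps the box and mask predictors of a COCO-pretrained torchvision Mask R-CNN for peanut-leaf classes and attaches a lightweight severity classifier to cropped detections. The class counts and the severity head are hypothetical placeholders; the fully integrated parallel-branch design described above would instead extend the ROI heads inside the detector.

import torch
import torch.nn as nn
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

NUM_CLASSES = 3    # hypothetical: background, healthy leaf, diseased leaf
NUM_SEVERITY = 5   # hypothetical 5-level severity scale

# Load a COCO-pretrained Mask R-CNN and replace its heads for our classes.
model = maskrcnn_resnet50_fpn(weights="DEFAULT")
in_feats = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_feats, NUM_CLASSES)
mask_feats = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(mask_feats, 256, NUM_CLASSES)

# Simplified stand-in for the parallel severity branch: a small CNN that
# scores severity on image crops of detected leaves (hypothetical design).
severity_head = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, NUM_SEVERITY),
)

model.eval()
with torch.no_grad():
    image = torch.rand(3, 800, 800)        # placeholder high-resolution RGB frame
    detections = model([image])[0]         # dict with boxes, labels, scores, masks
    for box in detections["boxes"]:
        x0, y0, x1, y1 = box.int().tolist()
        crop = image[:, y0:y1, x0:x1].unsqueeze(0)
        if crop.shape[-1] > 1 and crop.shape[-2] > 1:
            severity_logits = severity_head(crop)   # one score vector per leaf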
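Although the project plans to run these analyses in R, a minimal Python sketch (using scikit-learn and purely synthetic, illustrative data) shows the intended workflow of predicting a complex trait such as yield from image-derived traits; the trait columns and coefficients below are assumptions, not project data.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical plot-level table of image-derived traits:
# plant height (cm), LAI, NDVI, canopy temperature (C), visible pod count
X = rng.uniform([20, 1.0, 0.4, 25, 50], [60, 5.0, 0.9, 38, 300], size=(120, 5))
# Synthetic yield, used only to make the sketch runnable end to end
y = 800 + 6 * X[:, 0] + 250 * X[:, 2] + 2.5 * X[:, 4] + rng.normal(0, 80, 120)

# Cross-validated random forest regression, one of the candidate models
model = RandomForestRegressor(n_estimators=300, random_state=0)
r2_scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {r2_scores.mean():.2f} +/- {r2_scores.std():.2f}")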
Objective 2: Develop and evaluate an AI-powered vision-based ground robotic system for automated bareroot pine seedling inventory.

A commercial unmanned ground vehicle (UGV) will be retrofitted to carry a horizontal tool bar that pushes the pine seedlings over on the go so that the woody stems become visible for imaging. Above the tool bar, multiple top-view RGB stereo cameras will be mounted to cover the seedling drill rows. The stereo cameras will allow root collar diameter (RCD) measurement in 3D space using depth estimation. To minimize the influence of outdoor lighting conditions, high-intensity strobe lights will be used in synchronization with the stereo cameras. An RTK-GPS module and an inertial measurement unit will be used for autonomous navigation and georeferencing of the video data. An NVIDIA Jetson AGX Xavier developer kit will serve as the video streaming and robot control platform, as it is currently among the most powerful edge computers for AI robotics applications. The Robot Operating System (ROS) will be used to develop the navigation control and a web-based user interface that can be accessed from a mobile device (e.g., smartphone, tablet or laptop).

A CNN model will be developed to perform simultaneous detection, classification, segmentation and tracking of individual seedlings in the stereo video data. The detection task finds a bounding box around each seedling. The classification task predicts whether the seedling is healthy or under disease stress (i.e., fusiform rust, pitch canker, Rhizoctonia blight, or nematodes). The segmentation task delineates the contour of a seedling stem. The tracking task follows the same seedling from bending to standing so that it is not counted multiple times. The processing pipeline will be built on state-of-the-art video instance segmentation models. In addition, stereo 3D reconstruction will be built into the proposed model efficiently by fully utilizing its intermediate results. The 3D reconstruction will be used to estimate RCD and seedling height; a geometric sketch of the RCD computation is given below.

Dr. Ryan Nadel will design field experiments at three commercial forest tree nurseries. The field design will include two pine species and two seedlots per species. For each seedlot, 25 plots (1 foot long by 4 feet wide) will be chosen, uniformly distributed across the long nursery beds (500 feet long by 4 feet wide per bed). The GPS locations of the 25 plots will be surveyed with a portable RTK-GPS unit for matching against the automated georeferenced inventory results. For each plot, a standard counting frame (1 foot long by 4 feet wide) will first be laid down and the seedlings within the frame will be manually counted. In addition, 16 seedlings will be tagged with a metallic twist tie around the woody stem at half the seedling height so that they can be identified in the video. After the manual counting and tagging, georeferenced video data will be collected with the proposed inventory system. Finally, all the seedlings within the counting frame will be harvested and their quality parameters (i.e., RCD, shoot height, and health status) will be manually measured. The manual measurements from the tagged seedlings will be used for system evaluation at the individual seedling level. In addition, the means of the seedling quality parameters from the manual and automated methods will be compared on a per-plot basis. The repeatability of the proposed system will be evaluated by comparing the results of three scans of each field in both forward and backward travel directions.
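As one possible formulation of the stereo-based RCD measurement, the sketch below converts the pixel width of a segmented stem at the root collar into a metric diameter using the pinhole camera model. The focal length, depth values, and helper functions are illustrative assumptions, not the final pipeline.

import numpy as np

def stem_width_px(mask_row: np.ndarray) -> int:
    """Width in pixels of the stem mask along one image row."""
    cols = np.flatnonzero(mask_row)
    return int(cols.max() - cols.min() + 1) if cols.size else 0

def rcd_from_stereo(mask: np.ndarray, depth: np.ndarray, fx: float,
                    collar_row: int) -> float:
    """Estimate root collar diameter (m) via the pinhole model:
    metric width = pixel width * depth / focal length (in pixels)."""
    w_px = stem_width_px(mask[collar_row])
    # Median stereo depth over the stem pixels at the collar, for robustness
    z = float(np.median(depth[collar_row][mask[collar_row] > 0]))
    return w_px * z / fx

# Illustrative numbers: a 14-pixel-wide stem seen at 0.6 m with fx = 1400 px
mask = np.zeros((480, 640), dtype=np.uint8)
mask[300, 313:327] = 1
depth = np.full((480, 640), 0.6, dtype=np.float32)
print(f"RCD ~ {rcd_from_stereo(mask, depth, fx=1400.0, collar_row=300)*1000:.1f} mm")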
After the plot-level validation, the manual sampling-based method will also be used to estimate stand counts for four 500-foot beds (one bed per seedlot, with two seedlots per species). In addition, a time study of inventory crews will be performed and used to develop a cost-benefit comparison with the automated approach. Data collection will be conducted at each of the three nurseries four times per year (twice for spring inventory and twice for fall inventory).
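A minimal sketch of the sampling-based stand count extrapolation, assuming hypothetical plot counts: each bed estimate scales the mean count of the 1-foot sample plots by the 500-foot bed length, and the result can then be compared against the automated whole-bed count from the UGV system.

import statistics

# Hypothetical manual counts from 1 ft x 4 ft sample plots along one bed
plot_counts = [21, 19, 24, 22, 20, 23, 18, 22]   # seedlings per 1 ft of bed
BED_LENGTH_FT = 500

mean_per_ft = statistics.mean(plot_counts)
manual_estimate = mean_per_ft * BED_LENGTH_FT    # extrapolated bed stand count

automated_count = 10480                          # hypothetical UGV inventory result
rel_diff = (automated_count - manual_estimate) / manual_estimate
print(f"manual estimate: {manual_estimate:.0f}, automated: {automated_count}, "
      f"difference: {rel_diff:+.1%}")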