Source: AUBURN UNIVERSITY submitted to NRP
DEVELOPMENT OF AI-POWERED ROBOTIC HIGH-THROUGHPUT PLANT PHENOTYPING SYSTEMS FOR PRECISION PEANUT BREEDING AND FOREST TREE SEEDLING INVENTORY IN ALABAMA
Sponsoring Institution
National Institute of Food and Agriculture
Project Status
COMPLETE
Funding Source
Reporting Frequency
Annual
Accession No.
1023617
Grant No.
(N/A)
Cumulative Award Amt.
(N/A)
Proposal No.
(N/A)
Multistate No.
(N/A)
Project Start Date
Jul 28, 2020
Project End Date
Jul 26, 2024
Grant Year
(N/A)
Program Code
(N/A)
Recipient Organization
AUBURN UNIVERSITY
108 M. WHITE SMITH HALL
AUBURN, AL 36849
Performing Department
Biosystems Engineering
Non Technical Summary
Plant phenotyping is the quantitative assessment of complex plant traits such as health, productivity, architecture, growth and yield. It has a broad range of applications in agriculture, including plant breeding, crop scouting and crop inventory. Traditional plant phenotyping relies mainly on manual effort, which is labor-intensive, time-consuming, error-prone and ergonomically poor for field workers. Labor shortages and a changing climate continue to challenge the efficiency, profitability, sustainability and resiliency of crop production systems in the US. High-throughput plant phenotyping (HTPP) technologies are needed to improve current plant breeding and crop management practices. In the state of Alabama and across the Southeastern U.S., peanuts and pine trees are crops of high economic importance and sustainability. The ultimate goals of this project are to 1) automate field-based phenotyping of peanut plants for accelerated breeding, and 2) automate seedling inventory in bareroot pine seedling production. Two HTPP systems powered by artificial intelligence (AI) will be developed and evaluated, one for each goal. For peanut phenotyping, high-resolution multispectral, thermal and depth images of peanut plants will be collected by a crop-row-following, ultra-low-altitude drone. Flight campaigns will be conducted in conjunction with peanut breeding trials and drought experiments. For pine seedling inventory, an unmanned ground vehicle equipped with stereo cameras and a GPS unit will be developed to autonomously travel on a seedling bed and examine individual seedlings as they are mechanically exposed. The mobile inventory system will be tested at multiple commercial forest seedling nurseries within the State of Alabama. AI-based data processing pipelines will be developed to characterize plant architecture, health status and yield components at the plant and organ levels. Research findings will be disseminated through professional conferences and producer meetings. The proposed AI-powered HTPP technologies are expected to significantly improve the efficiency and efficacy of traditional crop monitoring practices and alleviate farm labor shortages, leading to increased productivity, profitability and sustainability for the Alabama agriculture and forestry sectors in the long term.
Animal Health Component
10%
Research Effort Categories
Basic
0%
Applied
10%
Developmental
90%
Classification

Knowledge Area (KA)   Subject of Investigation (SOI)   Field of Science (FOS)   Percent
402                   7410                             2020                     100%
Knowledge Area
402 - Engineering Systems and Equipment;

Subject Of Investigation
7410 - General technology;

Field Of Science
2020 - Engineering;
Goals / Objectives
The overarching goal of this project is to develop an AI-powered, vision-based robotic HTPP framework that is easily adaptable to various crop species (e.g., peanut, pine, soybean and cotton) and to different purposes (e.g., plant breeding, crop scouting, and inventory). Given the economic importance of peanut and pine production in Alabama, the specific objectives of this project are to 1) develop and evaluate an AI-powered, vision-based aerial robotic HTPP system to support precision peanut breeding, and 2) develop and evaluate an AI-powered, vision-based ground robotic system for automated bareroot pine seedling inventory. The specific aims within Objective 1 are to 1) develop a terrain-following UAV-based multi-modal imaging platform; 2) acquire millimeter-spatial-resolution multi-modal imagery of peanut plants and ground truth measurements at different growth stages; and 3) develop and evaluate an AI-driven image processing pipeline to characterize peanut plant architecture, plant stress, and pod count. The specific aims within Objective 2 are to 1) develop an AI-ready ground-based imaging platform for bareroot pine seedling production fields; 2) develop a deep learning-based video processing pipeline that can perform detection, tracking, root collar diameter estimation, shoot height estimation and health status classification for individual seedlings; and 3) evaluate the system performance against year, location, pine species, seedlot, growth stage, and travel direction.
Project Methods
Objective 1: Develop and evaluate an AI-powered vision-based aerial robotic HTPP system for precision peanut breeding.

A terrain-following UAV-based multi-modal imaging platform will be developed by integrating a stereo 3D camera, a multispectral camera, a thermal camera, an RTK-GPS module, and an edge computer with a hexacopter. A motion planning algorithm based on the 3D sensing and GPS will be developed to maintain a 2 m flight altitude above each crop row to obtain georeferenced, millimeter-spatial-resolution RGB, depth, multispectral, and thermal images. Drs. Chen and Sanz-Saez will carry out large-scale peanut breeding trials and small-scale peanut drought experiments, respectively, at the AAES E.V. Smith Research Center. The UAV campaigns will be performed bi-weekly during the growing season and after peanut digging. Manual ground truth data will be collected, including final yield and disease rating for the breeding trials, and LAI, leaf-level hyperspectral data and stomatal imprints for the drought experiments. After digging, peanut plants from 1 m long row sections will be processed to obtain pod count and harvest index for the drought experiments.

An AI-driven image processing pipeline will be developed to characterize peanut plant architecture, plant stress, and pod count after digging. The multi-task capability and transfer learning of deep convolutional neural networks (CNNs) will be exploited to realize plant organ (leaf, and pod after digging) instance segmentation and tracking in the multi-modal images. Regarding biotic stress, we will focus on peanut leaf diseases. A Mask R-CNN architecture will be employed, with two additional parallel branches added to classify leaf disease symptoms and severity level, respectively. A pretrained Mask R-CNN model will be modified and fine-tuned on our high-resolution RGB image dataset (see the first sketch below). As for abiotic stress, we will focus on plant drought response (i.e., stomatal characteristics). Thermal imaging has been used to indirectly quantify stomatal closure and stomatal conductance for different crops in the field. We will train a CNN to perform regression on the thermal images to predict the manually measured size, density and openness of stomata at the plot level (see the second sketch below). In addition to plant stress characterization, peanut pod detection after digging will enable not only pod count but also pod distribution (i.e., clustered or spread out). The number of visible pods after digging will be tested as a proxy for yield estimation. The pipeline will be implemented using the open-source PyTorch deep learning library.

In addition to these novel traits, common agronomic traits will also be extracted, including plant height, LAI and NDVI (or its variants). The multispectral images will be used to reconstruct the hyperspectral signals collected on the ground. Correlation analysis will be performed to compare the image-derived traits with the ground truth measurements. Additionally, different machine learning models (e.g., random forest, neural network, support vector machine) with the image-derived traits as inputs will be tested to predict complex traits such as yield and harvest index (see the third sketch below). Machine learning and statistical analyses will be conducted in the statistical program R.
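The following is a minimal PyTorch/torchvision sketch of the planned Mask R-CNN modification; the class counts (num_symptom_classes, num_severity_levels) and the attachment of the two parallel branches to the pooled box features are illustrative assumptions, not the final design.

    import torch
    import torch.nn as nn
    from torchvision.models.detection import maskrcnn_resnet50_fpn
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
    from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

    num_classes = 2            # background + peanut leaf
    num_symptom_classes = 4    # hypothetical disease symptom categories
    num_severity_levels = 5    # hypothetical severity scale

    # Start from a COCO-pretrained Mask R-CNN and swap in task-specific predictors.
    model = maskrcnn_resnet50_fpn(weights="DEFAULT")
    in_feat = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_feat, num_classes)
    in_feat_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feat_mask, 256, num_classes)

    # Two parallel branches on the 1024-d pooled box features; wiring them into
    # roi_heads.forward() (losses, post-processing) is the fine-tuning work.
    disease_head = nn.Linear(in_feat, num_symptom_classes)
    severity_head = nn.Linear(in_feat, num_severity_levels)

    # Shape check with dummy pooled RoI features (8 proposals, 256 x 7 x 7).
    box_feats = model.roi_heads.box_head(torch.randn(8, 256, 7, 7))
    print(disease_head(box_feats).shape, severity_head(box_feats).shape)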
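Next, a minimal sketch of the thermal-image regression idea, assuming a single-channel thermal input resized to 224 x 224 and three plot-level targets (stomatal size, density and openness); the ResNet-18 backbone is an illustrative choice, not specified by the project.

    import torch
    import torch.nn as nn
    from torchvision.models import resnet18

    model = resnet18(weights=None)
    model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)  # 1-channel thermal input
    model.fc = nn.Linear(model.fc.in_features, 3)  # predict size, density, openness

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()

    thermal = torch.randn(4, 1, 224, 224)  # dummy plot-level thermal crops
    targets = torch.rand(4, 3)             # dummy normalized stomatal traits
    loss = loss_fn(model(thermal), targets)
    loss.backward()
    optimizer.step()
    print(float(loss))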
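Finally, for the trait-to-yield models, the project plans to work in R; the sketch below uses Python/scikit-learn only to stay consistent with the other examples, with synthetic plot-level traits standing in for real data.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.random((120, 4))  # synthetic traits: height, LAI, NDVI, canopy temperature
    y = X @ np.array([2.0, 1.5, 3.0, -1.0]) + rng.normal(0, 0.2, 120)  # synthetic yield

    rf = RandomForestRegressor(n_estimators=300, random_state=0)
    scores = cross_val_score(rf, X, y, cv=5, scoring="r2")
    print(f"cross-validated R^2: {scores.mean():.2f}")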
Objective 2: Develop and evaluate an AI-powered vision-based ground robotic system for automated bareroot pine seedling inventory.

A commercial unmanned ground vehicle (UGV) will be retrofitted to carry a horizontal tool bar that pushes the pine seedlings over on-the-go so that the woody stems become visible for imaging. Above the tool bar, multiple top-view RGB stereo cameras will be placed to cover the seedling drill rows. The stereo cameras will allow root collar diameter (RCD) measurement in 3D space using depth estimation. To minimize the influence of outdoor lighting conditions, high-intensity strobe lights will be synchronized with the stereo cameras. An RTK-GPS module and an inertial measurement unit will be used for autonomous navigation and georeferencing of the video data. An NVIDIA Jetson AGX Xavier developer kit, a high-performance edge computer for AI robotics applications, will handle video streaming and robot control. The Robot Operating System (ROS) will be used to develop the navigation control and a web-based user interface accessible from a mobile device (e.g., smartphone, tablet or laptop).

A CNN model will be developed to perform simultaneous detection, classification, segmentation and tracking of individual seedlings in the stereo video data. The detection task finds a bounding box around each seedling. The classification task predicts whether a seedling is healthy or under disease stress (i.e., fusiform rust, pitch canker, Rhizoctonia blight or nematodes). The segmentation task delineates the contour of a seedling stem. The tracking task follows the same seedling from bending to standing so that it is not counted multiple times (a minimal counting baseline is sketched below). The processing pipeline will be built on state-of-the-art video instance segmentation models. In addition, stereo 3D reconstruction will be built efficiently into the proposed model by fully utilizing its intermediate results. The 3D reconstruction will be used to estimate RCD and seedling height (see the second sketch below).

Dr. Ryan Nadel will design field experiments at three commercial forest tree nurseries. The field design will include two pine species and two seedlots per species. For each seedlot, 25 plots (1 ft long by 4 ft wide) will be chosen, uniformly distributed across the long nursery beds (500 ft long by 4 ft wide per bed). The GPS locations of the 25 plots will be surveyed with a portable RTK-GPS unit for matching against the automated georeferenced inventory results. For each plot, a standard counting frame (1 ft long by 4 ft wide) will first be laid down and the seedlings within the frame will be manually counted. In addition, 16 seedlings will be tagged with a metallic twist tie around the woody stem at half the seedling height so that they can be identified in the video. After the manual counting and tagging, georeferenced video data will be collected with the proposed inventory system. Finally, all seedlings within the counting frame will be harvested and their quality parameters, i.e., RCD, shoot height and health status, will be manually measured. The manual measurements from the tagged seedlings will be used for system evaluation at the individual seedling level. In addition, the means of the seedling quality parameters from the manual and automated methods will be compared on a per-plot basis. The repeatability of the proposed system will be evaluated by comparing the results of three scans of each field in both forward and backward traveling directions.
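The plan is to build on state-of-the-art video instance segmentation models; as a minimal, hypothetical baseline, the greedy IoU tracker below illustrates only the counting logic (a detection that matches no existing track starts a new track and increments the count).

    def iou(a, b):
        """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union > 0 else 0.0

    def update_tracks(tracks, detections, next_id, thr=0.3):
        """Greedy IoU association; unmatched detections become new (counted) tracks."""
        matched = set()
        for tid in list(tracks):
            best, best_iou = None, thr
            for i, det in enumerate(detections):
                v = iou(tracks[tid], det)
                if i not in matched and v > best_iou:
                    best, best_iou = i, v
            if best is not None:
                tracks[tid] = detections[best]
                matched.add(best)
        for i, det in enumerate(detections):
            if i not in matched:
                tracks[next_id] = det  # new seedling enters the field of view
                next_id += 1
        return tracks, next_id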
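And a sketch of how RCD could be recovered from a stereo depth map and a stem segmentation mask under a pinhole camera model; the row-wise measurement and median-depth choice are assumptions for illustration.

    import numpy as np

    def rcd_mm(mask_row, depth_row_m, fx):
        """Estimate root collar diameter (mm) along one image row crossing the stem.

        mask_row:    boolean stem-segmentation values along the row
        depth_row_m: per-pixel depth in meters along the same row
        fx:          focal length in pixels from the stereo calibration
        """
        cols = np.flatnonzero(mask_row)
        if cols.size < 2:
            return float("nan")
        px_width = cols[-1] - cols[0] + 1       # stem width in pixels
        z = np.nanmedian(depth_row_m[cols])     # robust depth at the stem
        return px_width * z / fx * 1000.0       # pinhole model: width = px * Z / fx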
After the plot-level validation, the manual sampling-based method will also be used to estimate stand counts for four 500-foot-long beds (one bed per seedlot, with two seedlots per species); the sketch below shows the extrapolation arithmetic. In addition, a time study of inventory crews will be performed and used to develop a cost-benefit comparison with the automated approach. Data collection will be conducted at each of the three nurseries four times per year (twice for spring inventory and twice for fall inventory).
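A minimal sketch of the sampling-based stand count estimate, assuming 25 one-foot counting frames per 500-foot bed and simple mean extrapolation; the Poisson-simulated counts are placeholders.

    import numpy as np

    rng = np.random.default_rng(1)
    plot_counts = rng.poisson(30, size=25)  # placeholder counts from 25 one-foot frames

    bed_length_ft = 500
    estimate = plot_counts.mean() * bed_length_ft  # seedlings per bed
    se = plot_counts.std(ddof=1) / np.sqrt(len(plot_counts)) * bed_length_ft
    print(f"estimated stand count: {estimate:.0f} +/- {1.96 * se:.0f} (95% CI)")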

Progress 10/01/20 to 09/30/21

Outputs
Target Audience: Target audiences include students, researchers in plant sciences and agricultural and biological engineering, citizens, and stakeholders. Specific stakeholders include members of the Southern Forest Tree Nursery Management Cooperative (4 private, 3 industrial, 6 state, 1 federal member). Changes/Problems: Co-PI Ryan Nadel transitioned to industry. PI Bao will work with Dr. Scott Enebak, Director of the SFNMC, for the seedling inventory work. What opportunities for training and professional development has the project provided? One PhD student and two MS students received training and professional development. The PhD student acquired hands-on experience in 2D and 3D machine vision sensor development, field data collection and management, and training deep learning models for forest tree seedling inventory and peanut yield estimation. The PhD student presented the work on peanut yield estimation at the 2021 American Society of Agricultural and Biological Engineers (ASABE) annual international virtual meeting. The first MS student was trained to operate UAV-based imaging platforms and to collect and post-process georeferenced aerial multi-modal imagery data (hyperspectral, multispectral, thermal, RGB) for peanut phenotyping. The second MS student was trained to assist with 3D stereo imagery data collection and processing. An undergraduate student was trained to assist with pine seedling inventory data collection. How have the results been disseminated to communities of interest? Preliminary results on 1) pod-count-based peanut yield prediction using computer vision and deep learning and 2) pine tree architecture phenotyping were presented at the 2021 ASABE annual international virtual meeting. Research progress on automated bareroot pine seedling inventory was presented at the annual contact meeting to the members of the Southern Forest Nursery Management Cooperative (SFNMC). In addition, project updates were disseminated to stakeholders through SFNMC newsletters. What do you plan to do during the next reporting period to accomplish the goals? The machine vision-based seedling inventory system will be further tested for fall inventories, when beds are fully covered by seedling needles. A PhD student will develop and evaluate a deep learning-based video processing pipeline for seedling detection and tracking for both spring and fall inventories. A GIS-based user interface will be developed to facilitate visualization and spatial analysis of the machine-derived seedling density maps. UAV-based multi-modal imagery data will continue to be collected for peanut field trials concerning drought tolerance, disease resistance, maturity, and yield during the 2022 growing season. An MS student will investigate machine and deep learning techniques to predict agronomic traits (biomass, yield, disease rating) and physiological traits (photosynthesis and stomatal conductance) from the remotely sensed data. A ground robotic sensor platform will be developed for peanut and pine seedling production systems towards automated in-field peanut pod counting and seedling inventory.

Impacts
What was accomplished under these goals? A tractor-based machine vision system was developed and tested for automated bareroot seedling inventory at commercial forest tree nurseries in Alabama and Georgia. A deep learning-based pine seedling detection model was trained for spring inventory, achieving an average seedling counting accuracy of 96%. Extensive manual stand count data were also analyzed, showing that the conventional sampling-based inventory practice yields an average accuracy of 95%. The AI-based machine vision system can therefore be expected to be more accurate than the manual approach, at least for spring inventory. Potential impacts of this technology for the U.S. forest tree nursery industry include reduced labor costs, improved planning for stock shipments and outplantings, and improved profitability. UAV-based multi-modal imagery data (RGB, multispectral, hyperspectral, and thermal) were collected for several different peanut field experiments concerning drought tolerance, maturity, and disease resistance during the growing season. The experiments were in collaboration with Co-PIs Chen and Sanz-Saez, Dr. Phat Dang at the National Peanut Research Laboratory, and other faculty members at Auburn University. This comprehensive dataset lays the foundation for developing an AI-powered high-throughput phenotyping framework for peanut and other row crops. In addition, a pushcart-based multi-view imaging platform was developed and used to image inverted peanut plants in the field for three yield breeding trials in Alabama. A deep learning model was trained to count peanut pods in images from three camera angles, and the counts were used to calibrate a multivariate regression model to predict yield (a minimal sketch of this calibration step follows below). Preliminary results show that 59% of yield variation can be explained by the proposed method. The remote yield sensing system has the potential to enable peanut breeders to screen large populations for desired traits without labor constraints and thus accelerate the breeding of high-yielding varieties.
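A minimal sketch of that calibration step, with synthetic pod counts from three camera angles standing in for the deep-learning detections; the coefficients and units are placeholders.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(2)
    pod_counts = rng.poisson(40, size=(60, 3)).astype(float)  # placeholder counts per angle
    plot_yield = pod_counts @ np.array([0.4, 0.5, 0.3]) + rng.normal(0, 5, 60)

    reg = LinearRegression().fit(pod_counts, plot_yield)
    print(f"R^2 = {reg.score(pod_counts, plot_yield):.2f}")  # fraction of yield variation explained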

Publications

  • Type: Book Chapters Status: Published Year Published: 2021 Citation: Bao Y., Gai J., Xiang L., Tang L. (2021) Field Robotic Systems for High-Throughput Plant Phenotyping: A Review and a Case Study. In: Zhou J., Nguyen H.T. (eds) High-Throughput Crop Phenotyping. Concepts and Strategies in Plant Sciences. Springer, Cham. https://doi.org/10.1007/978-3-030-73734-4_2
  • Type: Conference Papers and Presentations Status: Published Year Published: 2021 Citation: RB Puhl, Y Bao, A Sanz-Saez, C Chen. (2021) Infield Peanut Pod Counting using Deep Neural Networks for Yield Estimation. 2021 ASABE Annual International Virtual Meeting. Paper no. 2101080. (doi:10.13031/aim.202101080)
  • Type: Conference Papers and Presentations Status: Published Year Published: 2021 Citation: M Akter, N Niknejad, Y Bao, RB Puhl, K Payn, J Zheng. (2021) Phenotyping of Pine Tree Architecture with Stereo Vision and Deep Learning. 2021 ASABE Annual International Virtual Meeting. Paper no. 2100847. (doi:10.13031/aim.202100847)


Progress 07/28/20 to 09/30/20

Outputs
Target Audience: Nothing Reported Changes/Problems: Nothing Reported What opportunities for training and professional development has the project provided? One PhD student in Biosystems Engineering acquired hands-on research experience in integrating multi-modal imaging sensors with a UAV for aerial crop phenotyping and in developing a machine vision system for forest tree seedling inventory. How have the results been disseminated to communities of interest? Nothing Reported What do you plan to do during the next reporting period to accomplish the goals? We will complete the development of the UAV-based multi-modal imaging system and the ground-based machine vision system. Imagery data and in-field ground truth data will be collected multiple times during the growing season. Image processing and machine learning algorithms will be developed and evaluated to assess the utility of the two systems in peanut breeding trials and at commercial forest tree nurseries, respectively.

Impacts
What was accomplished under these goals? Development of a UAV-based multi-modal crop imaging platform and a ground-based pine seedling imaging platform was initiated during this 2.5-month reporting period.

Publications