Source: ROCHESTER INSTITUTE OF TECHNOLOGY (INC) submitted to NRP
MOBILE AI PLATFORM FOR 3D ROOT PHENOTYPING AND TRAIT ASSESSMENT
Sponsoring Institution
National Institute of Food and Agriculture
Project Status
COMPLETE
Funding Source
Reporting Frequency
Annual
Accession No.
1025616
Grant No.
2021-67021-34199
Cumulative Award Amt.
$453,687.00
Proposal No.
2020-08783
Multistate No.
(N/A)
Project Start Date
Feb 15, 2021
Project End Date
Feb 14, 2025
Grant Year
2021
Program Code
[A1521]- Agricultural Engineering
Recipient Organization
ROCHESTER INSTITUTE OF TECHNOLOGY (INC)
1 LOMB MEMORIAL DR
ROCHESTER, NY 14623
Performing Department
Center for Imaging Science
Non Technical Summary
Roots, a plant's hidden half, play a central role in plant functions and interactions with biotic and physical environments. However, significant gaps exist for incorporating root traits in crop improvement programs. The root phenotyping bottleneck continues to be a significant barrier not only for breeding progress but also for the full use of genomics data. Current field phenotyping methods based on root excavation, digital imaging, and image analysis are cumbersome and frequently require specialized equipment optimized for model systems and well-characterized crop species. The main goal of this project is to develop a lightweight mobile AI platform based on unsupervised deep neural networks for accurate and fast construction of 3D plant root systems and trait extraction that can be used for multiple crop species. This novel project will have high impact via numerous tools and algorithms, beginning with 3D root system reconstruction from images, growth tracking, and extraction of key phenotypic variations in the root system. This project will provide tools that will enable plant scientists to increase field root phenotyping throughput and lead to the practical incorporation of root traits in crop breeding programs.
Animal Health Component
25%
Research Effort Categories
Basic
0%
Applied
25%
Developmental
75%
Classification

Knowledge Area (KA) | Subject of Investigation (SOI) | Field of Science (FOS) | Percent
402 | 1110 | 2020 | 50%
102 | 1450 | 1060 | 30%
212 | 1110 | 1060 | 20%
Goals / Objectives
The main goal of the project is to develop a Mobile AI Platform for 3D root reconstruction and trait assessment that can be easily used by root physiologists, geneticists, and breeders without sophisticated and expensive equipment. We will further extend these methods to extract key phenotypic variations in the root system linked to the performance of crops under biotic and abiotic stresses and different cultural and nutrient management scenarios.
Specific Objectives:
1) Design a mobile AI platform based on unsupervised deep neural networks to reconstruct 3D plant root systems.
2) Develop root trait extraction algorithms based on 3D root models.
3) Validate the mobile App for root trait extraction on fruit, root, and tuber crops.
4) Outreach and dissemination of results to the scientific community.
Project Methods
Root Image Acquisition for 3D Construction: A user-friendly mobile AI platform will be developed. This platform will require a 3- to 5-second video around the root, roughly 200-300 video frames for modern cameras (60 fps). The App will simultaneously collect IMU data synchronized by timestamp. The 200-300 video frames mainly serve to cover sufficient perspectives of the root; the App will select one frame from every 20-30 frames (15-30 frames in total) from the entire video to quickly reconstruct the root. Root images of one-year-old Malling 7 EMLA ('M.7'), a commercially important fire-blight-susceptible apple rootstock, will be used for development of the mobile AI App. Sweetpotato root images will be generated from six cultivars grown in greenhouse conditions.
3D Plant Root Reconstruction Framework: An unsupervised deep structure-from-motion (SfM) system with inertial measurement will be developed to reconstruct a complete 3D root structure from multiple input images during real application. A sensor fusion strategy will be used to incorporate synchronized motion and rotation information from the raw inertial data of mobile devices. Combining the visual camera with IMU sensors will correct the scale ambiguity of camera motion estimation in monocular SfM in real time, since the real sizes of the roots are also important factors for analysis. The per-frame depth maps from the SfM network will be fused into a complete 3D root point cloud. Both camera and IMU sensors are standard in modern mobile phones and tablets, so the proposed framework can be easily deployed on mobile devices.
Segmentation Based on Spatial and Spectral Information: In contrast to shape-from-silhouettes and visual-hull methods, the proposed 3D reconstruction method will not rely on segmentation to reconstruct roots; however, the background will also be reconstructed. We will therefore rely on both spatial and spectral information to segment the 3D root models. Because we estimate the scene depth for each frame and the root is close to the camera, the root can be isolated based on depth information. Meanwhile, because root color is relatively uniform and distinct from the background, color information can also help filter out background points with ambiguous depth. Using spectral and spatial information together, the 3D root model will be accurately segmented.
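As a rough illustration of this combined filter, the sketch below applies depth and color tests to a fused point cloud, assuming per-point RGB values in [0, 1]; the thresholds and the reference root color are illustrative placeholders rather than values from the project.

```python
import numpy as np

def segment_root_points(points, colors, depth_near=0.4, depth_far=0.8,
                        root_color=(0.5, 0.4, 0.3), color_thresh=0.2):
    """Keep points that are confidently close to the camera, plus points
    at ambiguous depth whose color resembles the root.

    points: (N, 3) array; points[:, 2] is depth along the camera axis.
    colors: (N, 3) array of RGB values in [0, 1].
    """
    depth = points[:, 2]
    near = depth < depth_near                        # confidently root by depth
    ambiguous = (depth >= depth_near) & (depth < depth_far)
    color_dist = np.linalg.norm(colors - np.asarray(root_color), axis=1)
    root_like = color_dist < color_thresh            # root-colored points
    keep = near | (ambiguous & root_like)
    return points[keep], colors[keep]
```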
Reconstruction Verification with Ground Truth: The project will also create a synthetic root dataset in Unity to verify the 3D reconstruction. A sequence of root images will be generated by continuously rotating each 3D model and projecting it into 2D images. The ego-motions can then be obtained by controlling the virtual camera position and rotation, analogous to the ego-motion data retrieved from the IMU sensor. Because the virtual data is created precisely, ground truth is available to verify the mobile AI platform quantitatively. The virtual models will also be 3D printed so that we have physical models standing in for real roots during testing. For real roots, the 3D models reconstructed by our mobile AI platform will be compared with ones scanned by WinRhizo to verify our AI App.
Root Growth Detection and Tracking: Once accurate 3D models can be constructed on mobile phones, more accurate and realistic root representations are available for trait extraction and analysis. Many existing trait extraction methods are based on 2D images; as roots are physically 3D objects, trait extraction on 3D models provides a more precise description of root properties. With precisely reconstructed 3D roots, further algorithmic exploration of trait extraction on 3D roots will be conducted. The 3D root model will be used as the input for root tip detection and growth tracking. As the main purpose is to use analytical methods to capture variation in the root system after reconstructing the root system architecture, tracking root growth and variation is critical.
Extraction of Root Smoothness: Calculating the smoothness of a 3D root model involves two major steps: first smoothing the surface, then calculating the displacement between the original 3D model and the smoothed 3D model. Surface smoothing will be realized by projecting all contour points onto a local regression line. Specifically, a sliding window will be applied along the contour points. For each point in the 3D model, N (e.g., 20) neighboring points in the window surrounding the target point will be sampled and a local regression line estimated; the current point is then projected onto this line. Applying this to all points smooths the contour and draws the points closer together. The larger the window, the smoother the surface; however, when the smoothing window contains too many points, important features such as corners are smoothed out. To avoid this, weighted orthogonal least squares fitting will be adopted. Once a 3D root point cloud has been smoothed, we will compute each point's displacement as the Euclidean distance between the original 3D point and its smoothed counterpart. Summing all corresponding point distances yields a measure of surface smoothness: a high value indicates a coarse root surface, while a lower value indicates a smoother one.
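A compact version of this smooth-then-measure procedure is sketched below, assuming an ordered array of contour points and using a plain (unweighted) orthogonal least squares line per window; the weighted fitting described above is a refinement this sketch omits.

```python
import numpy as np

def smoothness_score(contour, window=20):
    """Project each contour point onto the orthogonal-least-squares line of
    its neighborhood, then sum the displacements (higher = coarser surface)."""
    pts = np.asarray(contour, dtype=float)   # (N, 3) ordered contour points
    smoothed = np.empty_like(pts)
    half = window // 2
    for i in range(len(pts)):
        nbrs = pts[max(0, i - half):i + half + 1]
        center = nbrs.mean(axis=0)
        # First right-singular vector = principal direction of the window.
        _, _, vt = np.linalg.svd(nbrs - center)
        direction = vt[0]
        # Orthogonal projection of the point onto the local line.
        smoothed[i] = center + np.dot(pts[i] - center, direction) * direction
    displacement = np.linalg.norm(pts - smoothed, axis=1)
    return smoothed, displacement.sum()
```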
Extraction of Root Length and Thickness: Root branch length and branching pattern are also essential factors in analyzing the root. Two solutions are proposed to estimate root branch length. The first is based on morphological operations, which remove the main body of the root. We will then match the branch tip detection result to the morphological opening result, further removing all non-branch points left by the opening operation. The remaining branch region can be used to measure the shortest path between the two ends of each branch with Dijkstra's algorithm (a minimal sketch appears at the end of this section). The second solution is based on 3D root skeleton segmentation, which estimates branch length from the endpoints of the 3D skeleton branches. To calculate root thickness, a normal vector will be computed for each point on the segmented root branches. The normal vector intersects the opposite side of the root surface, giving the diameter of the root at the current point; the average of all root points' diameters represents the thickness of the current root branch.
Extraction of Root Angles across Different Time Points: Features extracted from accurate reconstruction models will be explored to analyze the growth of the entire plant. The angle between root branches can be calculated from the root structure. Once the root tips are detected, each branch tip is connected to the root top starting point. If the connecting line cannot pass continuously through root points, it is rotated clockwise around the lower endpoint until enough root points fit the line. The last point fitting the rotated line becomes a new joint point, which is connected to the top root point again and rotated to find the next joint point. This process repeats until the line between the joint point and the root top point passes continuously through the root. The root branch angle can then be computed between the 3D lines of the main root and the lateral branch.
Trait Extraction Verification with Ground Truth: The extracted traits will be compared with ground truth on the virtual roots. We will also 3D print the artificial roots as physical models; the traits of the printed roots will be extracted and compared with the ground truth. Additionally, real root traits will be manually measured and compared with the trait extraction of the mobile App. We will also compare the same traits extracted by our App with those extracted by WinRhizo.
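As an illustration of the Dijkstra-based length measurement referenced above, the sketch below treats the branch point cloud as a k-nearest-neighbor graph and measures the shortest path between two endpoint indices with SciPy; the neighbor count and the endpoint indices are illustrative assumptions.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra
from scipy.spatial import cKDTree

def branch_length(points, start_idx, end_idx, k=6):
    """Geodesic branch length: shortest path between two endpoint indices
    over a k-nearest-neighbor graph of the branch point cloud."""
    pts = np.asarray(points, dtype=float)
    dist, idx = cKDTree(pts).query(pts, k=k + 1)   # neighbor 0 is the point itself
    rows = np.repeat(np.arange(len(pts)), k)
    graph = csr_matrix((dist[:, 1:].ravel(), (rows, idx[:, 1:].ravel())),
                       shape=(len(pts), len(pts)))
    # Undirected Dijkstra from the start endpoint to every other point.
    d = dijkstra(graph, directed=False, indices=start_idx)
    return d[end_idx]
```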

Progress 02/15/23 to 02/14/24

Outputs
Target Audience: In the recent reporting period of our project, we have actively engaged with a diverse group of stakeholders, utilizing a blend of virtual and in-person methods such as meetings, workshops, presentations, and publications. Our outreach efforts have encompassed: 1. Apple Growers: We connected with apple growers, discussing how our project's advancements could benefit their orcharding practices. 2. Sweet Potato Growers: Our team engaged with sweet potato growers, focusing on the specific challenges and opportunities in this area of agriculture. 3. Diverse Farming Communities: We reached out to farmers cultivating various crops, exploring the broader applications of our research in different agricultural contexts. 4. Robotics and Computer Vision Scientists: Collaborations with experts in robotics and computer vision were vital in advancing our project's technological aspects. 5. Extension Educators: By working with extension educators, we ensured that the knowledge and technologies developed were effectively disseminated to a wider agricultural audience. 6. Crop Consultants: Our interactions with crop consultants focused on practical applications of our research in optimizing crop management and consulting services. 7. Students: We actively involved students, providing them with educational opportunities and hands-on experience in our research areas. 8. Plant Scientists: Collaboration with plant scientists helped in deepening our research's impact on plant science, particularly in areas like phenotyping and genetic research. 9. Industry Professionals: Engaging with industry professionals allowed us to align our research outcomes with market needs and explore potential commercial applications. Through these varied engagement efforts, we have worked to ensure that our project's findings and innovations are communicated effectively to all relevant sectors, thereby maximizing their impact and utility in both scientific and practical domains.
Changes/Problems: The PI is optimistic about the project's progress, despite significant challenges shaped by the Covid-19 pandemic and institutional changes. The pandemic initially disrupted our plans, particularly affecting student recruitment, but this led to a strategic shift towards algorithm development using publicly available datasets. A major development has been the PI's move from the Rochester Institute of Technology (RIT) to the University of Georgia (UGA), a leading agricultural institution. This transition is expected to greatly benefit the project, positioning UGA as its central hub. UGA's rich agricultural resources and expertise complement our project's goals, setting the stage for impactful collaborations and innovations in 3D crop modeling and mobile AI platforms. At UGA, the project stands to gain from direct access to agricultural practices and challenges, offering a comprehensive perspective on farmers' and growers' needs. This insight will guide the tailoring of our project outcomes to create practical, applicable solutions within the agricultural sector. The shift to UGA not only promises logistical advantages but also opens doors to a network of partnerships and collaborations, enhancing idea exchange and potentially leading to transformative advancements in agriculture. This synergy is expected to extend the project's influence, fostering resilience and innovation in the agricultural landscape.
Considering these changes and the pandemic's impact, the PI anticipates applying for a no-cost extension, confident that the project's impact will exceed initial expectations.
What opportunities for training and professional development has the project provided? Throughout this project, we have conducted a series of activities on educating students, a postdoc, and extension educators in AI methodologies, their applications in agriculture, and the importance of roots, rootstocks, and viruses for crop health and productivity. This comprehensive training covered technical skills such as computer vision, artificial intelligence, ELISA tests, statistical analysis, and interpreting multivariate data, contributing to the technical proficiency and personal development of participants like graduate student Praveen Kumar and undergraduates Bradley Gartner, Grant Bosworth, Jonathan Crowell, Caleb Johnson, and Linden Adamson. Key to our program was fostering teamwork and collaboration, vital for effective communication and achieving common goals in industry and research. This approach also sharpened problem-solving skills and critical thinking, enabling participants to analytically approach challenges and devise innovative solutions, particularly in AI's agricultural applications. Our project attracted interest from the farming community and industry leaders like Ford and General Motors, offering students firsthand industry insights and practical AI applications in agriculture. This bridged academic learning with real-world applications, enriching their educational experience and enhancing their career prospects. The training program's highlights include: 1) Comprehensive Knowledge Transfer: Diverse students gained AI insights and agricultural applications, with a focus on collaborative learning and interdisciplinary knowledge exchange.
2) Skill Development: Students developed a holistic technical skillset in modern agricultural practices, preparing them for impactful contributions. 3) Extension Educators' Engagement: Extension educators were integrated into training, enhancing their AI understanding and its agricultural integration. 4) Industry Partnerships: Collaborations with industry leaders like Ford and General Motors amplified the project's impact, providing practical insights and resources. Our ongoing commitment is to empower students with transformative knowledge, foster academia-industry collaboration, and utilize industry expertise for innovative agricultural solutions.
How have the results been disseminated to communities of interest? Our project has achieved significant milestones in publishing and presenting our research in AI, robotics, and agriculture. We have successfully published two papers in a top AI journal and conference proceedings, including one in IEEE Transactions on Image Processing (2021) on plant root reconstruction using a single image. Our presentations at major conferences and seminars have drawn considerable attention. At the AI computer vision conference WACV, with over 600 attendees, we presented on "3D Modeling Beneath Ground: Plant Root Detection and Reconstruction Based on Ground-Penetrating Radar." Additionally, our research on "3D Crop Structure Modeling Based on Mobile Platforms" has been showcased in six seminars, including at industry and university venues such as Google (2021) and various departments within the University of Florida (UF) and the University of Georgia (UGA) from 2021 to 2022. We have also presented at the Cornell Cooperative Extension-Lake Ontario Fruit team's summer tour in 2021, discussing "Underground Contributions to Declining Blocks in High-Density Orchards." Furthermore, our paper presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) and 10 invited talks at universities, companies, and conferences have broadened the reach of our research in 3D crop modeling. Key presentations developed from the project include topics like soybean breeding at UGA, AI in agriculture at the Georgia Climate Conference, and advanced 3D crop modeling at various scientific forums. We have also organized workshops at CVPR on "Visual Odometry and Computer Vision Applications based Location Clues," further disseminating our research. Overall, our outreach and dissemination activities have reached over 600 individuals, including farmers, breeders, and researchers in agriculture, AI, robotics, and computer vision, significantly contributing to the field and fostering collaboration between academia and industry.
What do you plan to do during the next reporting period to accomplish the goals? Our commitment to the goals outlined in our proposal continues to propel our project forward. In the upcoming reporting period, we are focusing on several key tasks to further our research objectives: Advanced Skeleton Extraction: We are developing sophisticated methods to extract 2D and 3D skeletons from images and models. This crucial step will reveal complex root system structures, offering vital insights into plant growth and development. Enhanced 3D Root Reconstruction: We aim to improve the accuracy of 3D root model reconstruction, using both virtual simulations and real-world data. By applying advanced imaging and computational techniques, we'll capture the complexities of root architectures with greater precision.
Expanded Trait Extraction: Our work includes creating algorithms for comprehensive trait extraction from 2D and 3D models. We are focusing on various root traits like angles, curvatures, diameter, length, and volume, to gain a deeper understanding of root systems and their agricultural impact. Dynamic Evaluation: We are conducting longitudinal evaluations of virus status and root system development in grafted apples and sweet potatoes. This research extends to greenhouse and field studies, examining how health, traits, and growth evolve over time. Multivariate Analysis: Our project involves detailed multivariate analyses to explore the relationships between viruses, root traits, and their combined effects on the health of apple trees and sweet potatoes. This analysis is essential to decipher the complex interactions affecting plant health. Expanding Horizons: We are extending our root modeling and trait extraction methods to various crop data, broadening the applicability of our research and driving innovation across different agricultural sectors. Dissemination of Insights: Continuously committed to knowledge sharing, we plan to present our findings at scientific meetings and publish in academic journals. This dissemination encourages idea exchange, advances the field, and promotes collaborative research. As we continue to advance this complex research, our determination remains strong to shed light on the nuances of plant root systems, thereby enhancing our understanding of agriculture and contributing to its sustainable future.

Impacts
What was accomplished under these goals? Our project has made significant advancements in 3D reconstruction of plant root systems and crop fields, focusing on innovative modeling and precise trait extraction:
1. 3D Modeling for Roots and Crops: 1) We've developed an unsupervised learning scheme for 3D root system architecture reconstruction, significantly surpassing existing 3D reconstruction models in handling complex root structures. 2) A deep neural network for plant root reconstruction, adapted for mobile platforms, enhances pose estimation accuracy using the phone's IMU unit. 3) Ground Penetrating Radar (GPR) technology is used to detect underground objects, with a graph neural network reconstructing root shapes, aiding in non-destructive underground object shape reconstruction. 4) Our large-scale 3D crop field reconstruction uses an unsupervised structure-from-motion framework, effectively tackling common challenges in crop datasets and aiding in plant phenotyping. 5) Extensive experimentation has validated the precision of our 3D crop reconstruction algorithm, marking a significant contribution to large-scale 3D crop reconstruction techniques.
2. 3D Root Trait Extraction: 1) Our algorithms, based on 3D root models, offer improved accuracy in measuring root traits compared to 2D imaging, allowing for detailed analyses of root volume, size, and other traits like angles and curvatures. 2) We are developing algorithms to adapt to different root systems, enhancing their applicability across various plant species.
3. 3D Modeling for Apple Tree and Sweet Potato Roots: 1) Our dataset includes 800 3D models of apple tree roots and 400 of sweet potato roots, each accompanied by multiple 2D images for comprehensive analysis. 2) For apple tree roots, we've incorporated randomness in stem shapes and branching patterns within Unity3D, reflecting natural diversity. Sweet potato root models mirror natural variability, with distinct stem counts and diameters, representing their unique root characteristics.
4. Mobile System Evaluation for Root Modeling and Trait Extraction: 1) We validated a mobile application for root trait extraction, involving creating virtual root models with ground truth and capturing extensive root images in greenhouse and field settings. 2) Experiments in high-density apple orchards and detailed image analysis provide critical data for manual and digital root system assessments.
Through these focused areas, our project enhances agricultural research, particularly in high-density orchards and diverse crop types, contributing significantly to the understanding of plant growth and environmental interaction.

Publications


    Progress 02/15/21 to 12/12/23

    Outputs
    Target Audience: In the recent reporting period of our project, we have actively engaged with a diverse group of stakeholders, utilizing a blend of virtual and in-person methods such as meetings, workshops, presentations, and publications. Our outreach efforts have encompassed: 1. Apple Growers: We connected with apple growers, discussing how our project's advancements could benefit their orcharding practices. 2. Sweet Potato Growers: Our team engaged with sweet potato growers, focusing on the specific challenges and opportunities in this area of agriculture. 3. Diverse Farming Communities: We reached out to farmers cultivating various crops, exploring the broader applications of our research in different agricultural contexts. 4. Robotics and Computer Vision Scientists: Collaborations with experts in robotics and computer vision were vital in advancing our project's technological aspects. 5. Extension Educators: By working with extension educators, we ensured that the knowledge and technologies developed were effectively disseminated to a wider agricultural audience. 6. Crop Consultants: Our interactions with crop consultants focused on practical applications of our research in optimizing crop management and consulting services. 7. Students: We actively involved students, providing them with educational opportunities and hands-on experience in our research areas. 8. Plant Scientists: Collaboration with plant scientists helped in deepening our research's impact on plant science, particularly in areas like phenotyping and genetic research. 9. Industry Professionals: Engaging with industry professionals allowed us to align our research outcomes with market needs and explore potential commercial applications. Through these varied engagement efforts, we have worked to ensure that our project's findings and innovations are communicated effectively to all relevant sectors, thereby maximizing their impact and utility in both scientific and practical domains.
Changes/Problems: The PI is optimistic about the project's progress, despite significant challenges shaped by the Covid-19 pandemic and institutional changes. The pandemic initially disrupted our plans, particularly affecting student recruitment, but this led to a strategic shift towards algorithm development using publicly available datasets. A major development has been the PI's move from the Rochester Institute of Technology (RIT) to the University of Georgia (UGA), a leading agricultural institution. This transition is expected to greatly benefit the project, positioning UGA as its central hub. UGA's rich agricultural resources and expertise complement our project's goals, setting the stage for impactful collaborations and innovations in 3D crop modeling and mobile AI platforms. At UGA, the project stands to gain from direct access to agricultural practices and challenges, offering a comprehensive perspective on farmers' and growers' needs. This insight will guide the tailoring of our project outcomes to create practical, applicable solutions within the agricultural sector. The shift to UGA not only promises logistical advantages but also opens doors to a network of partnerships and collaborations, enhancing idea exchange and potentially leading to transformative advancements in agriculture. This synergy is expected to extend the project's influence, fostering resilience and innovation in the agricultural landscape.
Considering these changes and the pandemic's impact, the PI anticipates applying for a no-cost extension, confident that the project's impact will exceed initial expectations.
What opportunities for training and professional development has the project provided? Throughout this project, we have conducted a series of activities on educating students, a postdoc, and extension educators in AI methodologies, their applications in agriculture, and the importance of roots, rootstocks, and viruses for crop health and productivity. This comprehensive training covered technical skills such as computer vision, artificial intelligence, ELISA tests, statistical analysis, and interpreting multivariate data, contributing to the technical proficiency and personal development of participants like graduate student Praveen Kumar and undergraduates Bradley Gartner, Grant Bosworth, Jonathan Crowell, Caleb Johnson, and Linden Adamson. Key to our program was fostering teamwork and collaboration, vital for effective communication and achieving common goals in industry and research. This approach also sharpened problem-solving skills and critical thinking, enabling participants to analytically approach challenges and devise innovative solutions, particularly in AI's agricultural applications. Our project attracted interest from the farming community and industry leaders like Ford and General Motors, offering students firsthand industry insights and practical AI applications in agriculture. This bridged academic learning with real-world applications, enriching their educational experience and enhancing their career prospects. The training program's highlights include: 1) Comprehensive Knowledge Transfer: Diverse students gained AI insights and agricultural applications, with a focus on collaborative learning and interdisciplinary knowledge exchange.
2) Skill Development: Students developed a holistic technical skillset in modern agricultural practices, preparing them for impactful contributions. 3) Extension Educators' Engagement: Extension educators were integrated into training, enhancing their AI understanding and its agricultural integration. 4) Industry Partnerships: Collaborations with industry leaders like Ford and General Motors amplified the project's impact, providing practical insights and resources. Our ongoing commitment is to empower students with transformative knowledge, foster academia-industry collaboration, and utilize industry expertise for innovative agricultural solutions.
How have the results been disseminated to communities of interest? Our project has achieved significant milestones in publishing and presenting our research in AI, robotics, and agriculture. We have successfully published two papers in a top AI journal and conference proceedings, including one in IEEE Transactions on Image Processing (2021) on plant root reconstruction using a single image. Our presentations at major conferences and seminars have drawn considerable attention. At the AI computer vision conference WACV, with over 600 attendees, we presented on "3D Modeling Beneath Ground: Plant Root Detection and Reconstruction Based on Ground-Penetrating Radar." Additionally, our research on "3D Crop Structure Modeling Based on Mobile Platforms" has been showcased in six seminars, including at industry and university venues such as Google (2021) and various departments within the University of Florida (UF) and the University of Georgia (UGA) from 2021 to 2022. We have also presented at the Cornell Cooperative Extension-Lake Ontario Fruit team's summer tour in 2021, discussing "Underground Contributions to Declining Blocks in High-Density Orchards." Furthermore, our paper presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) and 10 invited talks at universities, companies, and conferences have broadened the reach of our research in 3D crop modeling. Key presentations developed from the project include topics like soybean breeding at UGA, AI in agriculture at the Georgia Climate Conference, and advanced 3D crop modeling at various scientific forums. We have also organized workshops at CVPR on "Visual Odometry and Computer Vision Applications based Location Clues," further disseminating our research. Overall, our outreach and dissemination activities have reached over 600 individuals, including farmers, breeders, and researchers in agriculture, AI, robotics, and computer vision, significantly contributing to the field and fostering collaboration between academia and industry.
What do you plan to do during the next reporting period to accomplish the goals? Our commitment to the goals outlined in our proposal continues to propel our project forward. In the upcoming reporting period, we are focusing on several key tasks to further our research objectives: Advanced Skeleton Extraction: We are developing sophisticated methods to extract 2D and 3D skeletons from images and models. This crucial step will reveal complex root system structures, offering vital insights into plant growth and development. Enhanced 3D Root Reconstruction: We aim to improve the accuracy of 3D root model reconstruction, using both virtual simulations and real-world data. By applying advanced imaging and computational techniques, we'll capture the complexities of root architectures with greater precision.
Expanded Trait Extraction: Our work includes creating algorithms for comprehensive trait extraction from 2D and 3D models. We are focusing on various root traits like angles, curvatures, diameter, length, and volume, to gain a deeper understanding of root systems and their agricultural impact. Dynamic Evaluation: We are conducting longitudinal evaluations of virus status and root system development in grafted apples and sweet potatoes. This research extends to greenhouse and field studies, examining how health, traits, and growth evolve over time. Multivariate Analysis: Our project involves detailed multivariate analyses to explore the relationships between viruses, root traits, and their combined effects on the health of apple trees and sweet potatoes. This analysis is essential to decipher the complex interactions affecting plant health. Expanding Horizons: We are extending our root modeling and trait extraction methods to various crop data, broadening the applicability of our research and driving innovation across different agricultural sectors. Dissemination of Insights: Continuously committed to knowledge sharing, we plan to present our findings at scientific meetings and publish in academic journals. This dissemination encourages idea exchange, advances the field, and promotes collaborative research. As we continue to advance this complex research, our determination remains strong to shed light on the nuances of plant root systems, thereby enhancing our understanding of agriculture and contributing to its sustainable future.

    Impacts
    What was accomplished under these goals? Our project has made significant advancements in 3D reconstruction of plant root systems and crop fields, focusing on innovative modeling and precise trait extraction:
1. 3D Modeling for Roots and Crops: 1) We've developed an unsupervised learning scheme for 3D root system architecture reconstruction, significantly surpassing existing 3D reconstruction models in handling complex root structures. 2) A deep neural network for plant root reconstruction, adapted for mobile platforms, enhances pose estimation accuracy using the phone's IMU unit. 3) Ground Penetrating Radar (GPR) technology is used to detect underground objects, with a graph neural network reconstructing root shapes, aiding in non-destructive underground object shape reconstruction. 4) Our large-scale 3D crop field reconstruction uses an unsupervised structure-from-motion framework, effectively tackling common challenges in crop datasets and aiding in plant phenotyping. 5) Extensive experimentation has validated the precision of our 3D crop reconstruction algorithm, marking a significant contribution to large-scale 3D crop reconstruction techniques.
2. 3D Root Trait Extraction: 1) Our algorithms, based on 3D root models, offer improved accuracy in measuring root traits compared to 2D imaging, allowing for detailed analyses of root volume, size, and other traits like angles and curvatures. 2) We are developing algorithms to adapt to different root systems, enhancing their applicability across various plant species.
3. 3D Modeling for Apple Tree and Sweet Potato Roots: 1) Our dataset includes 800 3D models of apple tree roots and 400 of sweet potato roots, each accompanied by multiple 2D images for comprehensive analysis. 2) For apple tree roots, we've incorporated randomness in stem shapes and branching patterns within Unity3D, reflecting natural diversity. Sweet potato root models mirror natural variability, with distinct stem counts and diameters, representing their unique root characteristics.
4. Mobile System Evaluation for Root Modeling and Trait Extraction: 1) We validated a mobile application for root trait extraction, involving creating virtual root models with ground truth and capturing extensive root images in greenhouse and field settings. 2) Experiments in high-density apple orchards and detailed image analysis provide critical data for manual and digital root system assessments.
Through these focused areas, our project enhances agricultural research, particularly in high-density orchards and diverse crop types, contributing significantly to the understanding of plant growth and environmental interaction.

    Publications


      Progress 02/15/22 to 02/14/23

      Outputs
      Target Audience: During this project reporting period, we have reached out to various stakeholders through virtual and in-person meetings, workshops, presentations, and publications: 1) Apple growers 2) Sweet potato growers 3) Farmers of various crops 4) Robotics and computer vision scientists 5) Extension educators 6) Crop consultants 7) Students 8) Plant scientists 9) Industry
Changes/Problems: The PI foresees no problems in the project's progress. The landscape of our project has been significantly shaped by the reverberations of the Covid-19 pandemic, impacting various facets along the way. In the inaugural year of the project, the unforeseen pandemic forced us to recalibrate our plans, resulting in the cancellation of a planned student recruitment effort. This unexpected twist compelled us to pivot our attention towards the development of algorithms, capitalizing on publicly available datasets to fuel our progress. Another pivotal transformation was marked by the Principal Investigator's (PI) transition from the Rochester Institute of Technology (RIT) to the University of Georgia (UGA). This change of affiliation harbors immense potential, as UGA, renowned as a leading land-grant institution specializing in agriculture, offers a fertile ground for cultivating innovation. This transition accentuates the aspiration to designate UGA as the epicenter of the project's activities. The synergy between UGA's rich agricultural expertise and the project's vision is a force to be reckoned with. By forging robust collaborations with UGA's distinguished faculty members in the agriculture realm, the PI is poised to sculpt and validate the mobile AI platform for 3D crop modeling in a more holistic manner. The profound impact of this transition reaches beyond academia, resonating with diverse agricultural communities, thereby amplifying the project's potential for meaningful change. Beyond the logistical advantages, the relocation to UGA ushers in a host of unparalleled opportunities. The bounteous access to crop fields unfolds an era of firsthand engagement with agricultural practices, fostering an all-encompassing grasp of the challenges and nuances faced by farmers and growers. Armed with this nuanced understanding, the project's outcomes can be tailored to address their specific needs, yielding practical applications that reverberate across the broader agricultural tapestry. The affiliation shift to UGA is a catalyst for collaboration, knowledge sharing, and the potential for even more expansive impacts in the agricultural domain. It opens doors to a mosaic of partnerships, rendering the exchange of ideas a seamless endeavor. Through this dynamic interplay, the ripple effects of our efforts have the potential to cascade into transformative advancements, fostering resilience and innovation within the agricultural landscape. Influenced by the pandemic and marked by the transition to UGA, it is likely that the PI will apply for a no-cost extension. With that, the PI believes that the impact will be broader than our expectation.
What opportunities for training and professional development has the project provided? This endeavor not only contributed to the advancement of technical expertise but also nurtured the growth of human resources among the participants.
Beyond the acquisition of technical prowess by graduate student Praveen Kumar and five undergraduate students--Bradley Gartner, Grant Bosworth, Jonathan Crowell, Caleb Johnson, and Linden Adamson--the training initiative was meticulously crafted to cultivate a comprehensive spectrum of transferable skills and attributes pivotal for their professional development. Central to the training program was the cultivation of collaboration and teamwork among the participants. Collective engagement in projects and tasks served as a catalyst for effective communication, seamless coordination, and the art of synergizing individual strengths to achieve shared objectives. These collaborative experiences laid a sturdy foundation for seamless adaptation to teamwork-driven environments prevalent in both industry and research domains. The training curriculum placed a significant emphasis on honing problem-solving prowess and nurturing critical thinking acumen. Participants were actively encouraged to dissect challenges analytically, conceptualize innovative solutions, and apply them within the context of AI methodologies as they relate to agriculture. This honed their ability to navigate intricate problem landscapes, critically assess various options, and arrive at well-informed decisions--an indispensable skill set in any professional pursuit. Beyond the confines of academia, the project garnered keen interest from farmers and industry giants such as Ford Motor Company and General Motors. These industry partners engaged closely with the students, fostering a dynamic atmosphere for collaborative research undertakings. This direct interaction provided students with firsthand exposure to farming and industry practices and facilitated the exchange of insights into the practical application of AI within the realm of agriculture. This interaction bridged the gap between academic theory and real-world application, augmenting their educational journey with insights that resonate in corporate settings. As a result, this engagement with farming and industry pioneers not only enriched their learning experience but also bestowed them with invaluable skills and knowledge that are certain to elevate their career prospects in the future. Student Training and Education: a) Comprehensive Knowledge Transfer: We imparted comprehensive training and education to a diverse group of students, nurturing their understanding of AI methodologies and their transformative applications in the realm of agriculture. b) Diverse Student Cohort: The training initiative encompassed a spectrum of learners, including graduate student Praveen Kumar and five talented undergraduate scholars--Bradley Gartner, Grant Bosworth, Jonathan Crowell, Caleb Johnson, and Linden Adamson. c) Cultivating Collaborative Learning: The program emphasized the power of collaboration, fostering an environment where students actively engaged in interdisciplinary learning and shared insights. This approach facilitated dynamic knowledge exchange and mutual growth. d) Holistic Technical Skillset: Students gained proficiency in an array of technical skills vital to modern agriculture, including computer vision, artificial intelligence, deep learning, robotics, and phenotyping. This comprehensive skillset primes them for impactful contributions to the field. Collaboration with Extension Educators: a) Extension Educators' Engagement: In a proactive move, we involved extension educators in the training process. 
This initiative aimed to broaden their understanding of AI methodologies and their potential integration into agricultural practices. b) Fostering Knowledge Exchange: By bridging the gap between students and extension educators, we facilitated meaningful discussions and knowledge sharing. This collaborative approach ensured that AI concepts translated effectively into practical solutions, enriching agricultural practices. Industry Partnerships: a) Leveraging Industry Leaders: Our project forged significant alliances with industry titans, most notably Ford Motor Company and General Motors. b) Amplifying Project Impact: These partnerships emerged as catalysts, amplifying the value of our project outcomes and insights. The insights garnered from industry leaders enriched the project's direction and outcomes. c) Expertise and Resource Augmentation: Engaging with industry partners proved pivotal in obtaining specialized expertise and accessing additional resources. This collaboration extended beyond theoretical knowledge, allowing us to capitalize on practical insights and resources, thereby magnifying the project's overall impact. As we stride forward, our commitment remains unwavering--to equip students with transformative knowledge, bridge the gap between academia and practice through extension collaboration, and harness the expertise of industry leaders to catalyze innovative solutions in agriculture.
How have the results been disseminated to communities of interest? We have published one paper to disseminate the research outcomes, presented at the top robotics conference, the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). We have also presented the research outcomes on 3D modeling for crops in 10 invited talks, including university, company, and conference keynote talks and panel discussions. The PI also organized a CVPR workshop on "Visual Odometry and Computer Vision Applications based Location Clues" to disseminate related research. Through the outreach and dissemination activities, the 3D crop modeling and trait extraction research has been disseminated to over 600 farmers, breeders, and researchers in agriculture, AI, robotics, and computer vision. Below are the presentations developed from the project in the last reporting period.
1. Soybean breeding, Midville Field Day, UGA Southeast Research and Education Center, 2023
2. 3D Crop Structure Modeling Based on Mobile Platforms, University of Georgia Peanut, Plant Center Retreat, 2022
3. 3D Crop Structure Modeling Based on Mobile Platforms, Penn State University, Department of Electrical Engineering and Computer Science, 2022
4. Robot Vision for Building More Accessible Cyber Infrastructures, UGA Cyber-Physical Systems Symposium, 2022
5. Unsupervised 3D Perception under Multi-Sensing Modalities Based on Mobile Platforms, 14th Annual Conference of the Prognostics and Health Management Society, PHM Panel of Cyber-Physical Systems for Smart Manufacturing, 2022
6. Lightweight 3D Mapping and Localization, General Motors, 2022
7. 3D Perception under Multi-Sensing Modalities Based on Mobile Platforms, University of Maryland, Department of Computer Science, 2022
8. 3D Perception under Multi-Sensing Modalities Based on Mobile Platforms, Amazon Research, 2023
9. AI for Agriculture and Climate, Georgia Climate Conference, Panel on Opportunities and Risks of AI on Climate Solutions, 2023
10. From Beneath to Above Ground: Advancing 3D Crop Structure Modeling Under Diverse Mobile Systems, IEEE Robotics and Automation Society Agricultural Robotics and Automation Webinar, 2023
What do you plan to do during the next reporting period to accomplish the goals? Our unswerving dedication to the stipulated objectives in our proposal drives us to forge ahead, pushing the boundaries of our project's potential. As we embark on the next reporting period, a constellation of specific tasks will guide our focused efforts, propelling us closer to our goals: Advanced Skeleton Extraction: A pivotal task entails the development of cutting-edge methods for extracting 2D and 3D skeletons from both images and models. This endeavor will lay the groundwork for unraveling the intricate structures of root systems, enabling us to glean critical insights into plant growth and development. Enhanced 3D Root Reconstruction: Bolstering our commitment to precision, we are set to refine the reconstruction process for 3D root models across virtual simulations and real-world datasets. Leveraging advancements in imaging and computational techniques, we aim to attain an unprecedented level of accuracy in capturing the complexities of root architectures. Expanded Trait Extraction: Our quest for comprehensive understanding compels us to develop algorithms capable of extracting a myriad of traits from both 2D images and 3D root models. From angles, curvatures, diameter, length, to volume, these attributes offer a multifaceted glimpse into the intricate world of plant root systems, providing invaluable insights into their role in agricultural outcomes. Dynamic Evaluation: A pivotal dimension involves the systematic assessment of virus status, root system development, and the evolution of root traits in grafted apples and sweet potatoes. This longitudinal evaluation extends to controlled greenhouse settings and research crop fields, shedding light on the interplay between health, traits, and growth over time. Multivariate Analysis: Our scientific journey extends to conducting rigorous multivariate analyses, aiming to untangle the intricate relationships between viruses, root traits, and their synergistic effects on the health and vigor of apple trees and sweet potatoes. This step is vital in unraveling the complex web of interactions that shape plant outcomes. Expanding Horizons: The scope of our work stretches beyond existing boundaries, encompassing the extension of root modeling and trait extraction methodologies to diverse forms of crop data. By adapting our methods, we aim to broaden the applicability of our findings across various crops, driving innovation beyond our initial focus. Dissemination of Insights: Our commitment to sharing knowledge is underlined by our dedication to communicate results through dynamic presentations at extension and scientific meetings, alongside publishing findings in academic journals. This dissemination fosters a vibrant exchange of ideas, advancing the field and fostering collaboration. As we navigate this intricate landscape of research and innovation, we remain steadfast in our resolve to illuminate the complexities of plant root systems, enriching our understanding of agriculture and nurturing its sustainable future.

      Impacts
      What was accomplished under these goals? 1) Large-scale 3D modeling: In this project, we pioneer a novel unsupervised structure-from-motion (SfM) framework that revolves around machine learning principles. The task of conducting large-scale, in-situ 3D reconstructions of crop fields poses significant challenges. These 3D representations of crop structures play a pivotal role in plant phenotyping, capturing attributes that exert substantial influence on crop growth and yield. Current endeavors in this field primarily concentrate on plants at close range, with only a sparse array of deep learning-based methods tailored for the intricate task of large-scale 3D crop reconstruction. This scarcity is attributed to the limited availability of extensive crop sensing data required for such endeavors. The newly developed framework is meticulously designed to cater specifically to the challenges of reconstructing extensive 3D structures within crop fields. A noteworthy feature of our proposed framework is its ability to effectively address the common issue of imprecise depth inference arising from repetitive patterns inherent in crop datasets. This capability renders our approach particularly suitable for generating highly accurate 3D reconstructions of crops on a large scale. Through comprehensive experimentation conducted on the aforementioned crop dataset, our research demonstrates the remarkable precision and resilience of our 3D crop reconstruction algorithm. By overcoming the limitations of previous methods and embracing the potential of unsupervised learning, our work contributes significantly to the advancement of accurate and reliable large-scale 3D crop reconstruction techniques.
2) Apple Seedling Root 3D Model Design: The apple tree, a widely distributed member of the rose family (Rosaceae) found across the globe, serves as a representative example of general fruit tree species. With its ubiquity and significance, we selected the apple seedling's root structure as the basis for creating prototype 3D models. An examination of real apple seedling roots revealed their composition, consisting of a central stem and multiple branching structures. Notably, the stem boasts the largest diameter and tends to be the longest among all rhizomes. Branches, slightly smaller in diameter, exhibit intricate growth directions, at times intertwining and forming knots. Within the Unity3D design interface, we introduced randomness to determine the stem's shape. By assigning distinct values to each root, the resulting stem shape diverged for each unique root. The growth scale value, while not affecting root shape, governed the extent of root tip expansion, uniformly set at 1 for all seedlings. As all root models were seedlings, stem length showed minimal variation. In contrast, stem diameters varied, encompassing values ranging from 0.8 to 1.2. Given the apple seedling's numerous branches, branch shape and quantity were governed by randomization. Branch diameters spanned 0.3 to 0.7, with branch numbers mirroring real-world instances at 10 to 30. Crinkliness maintained a value around 0.2, reflecting natural attributes. Given the greater complexity of branch growth directions compared to stems, Seek Sun values for branches were adjusted to 0.2 to 0.3. This iterative approach resulted in a comprehensive dataset of 800 distinct apple seedling root 3D models. These models featured detailed parameters including branch count and thickness, each saved in obj format to allow for in-depth examination through common viewing software.
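To make this randomized design concrete, the sketch below draws one parameter set per model within the ranges stated above; the field names are assumptions mirroring the Unity3D settings mentioned in the text, not the project's actual schema.

```python
import random

def sample_apple_root_params(seed=None):
    """Draw one parameter set in the ranges described above. Field names
    are illustrative stand-ins for the Unity3D settings, not a real schema."""
    rng = random.Random(seed)
    return {
        "stem_diameter": rng.uniform(0.8, 1.2),
        "branch_diameter": rng.uniform(0.3, 0.7),
        "branch_count": rng.randint(10, 30),
        "crinkliness": 0.2,                 # roughly constant, per the text
        "seek_sun": rng.uniform(0.2, 0.3),  # branch growth-direction setting
        "growth_scale": 1.0,                # fixed at 1 for all seedlings
    }

# 800 parameter sets, one per generated apple seedling root model.
params = [sample_apple_root_params(seed=i) for i in range(800)]
```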
3) Sweet potato root 3D model design. The sweet potato, a twining herbaceous vine in the Convolvulaceae (morning glory) family, has a root structure characterized by multiple elongated stems, each bearing numerous whisker-like fine roots. Observation showed that real sweet potato roots typically bear between 6 and 15 stems, so the stem count for all 3D models was set randomly within this range. The sweet potato stem is notably thinner than the apple stem, so a diameter value of approximately 0.3 was chosen. Because sweet potato stems are generally straight, the Crinkliness value was confined to a small range around 0.0002, far lower than the values applied to apple seedling roots. Repeating this parameter-setting process with randomly varied values yielded 400 distinct 3D models of sweet potato roots. As with the apple seedling models, each sweet potato root 3D model includes detailed parameters and is stored in obj format for examination and analysis. 4) Evaluation of root system architecture and root traits. Root samples of apples excavated from a commercial orchard and the research orchard at Cornell were processed to evaluate root system architecture and root traits using the setup developed and reported last year, as follows. Root systems were washed with tap water until the soil was removed, then hung with a screw threaded into the trunk, maintaining a position similar to that of the tree in the field. A ruler was placed next to the trunk as a size reference for image scale, against a white background. Images of the entire root system were captured by rotating the root system from 0 to 270 degrees using a Canon EOS Rebel T5 camera, and were analyzed with ImageJ software version 1.8 (https://imagej.nih.gov/ij/). For the analysis, the scale was set by converting pixels to millimeters using the ruler, and images were converted to binary format to measure particle area. The average particle area of the four images (0, 90, 180, and 270 degrees) was taken as the projected area of the complete root system (see the sketch below); the width and depth of the root system were also measured in these pictures using ImageJ tools. Trunk diameters of the rootstock (ØRootstock) and scion (ØScion) were measured manually with a caliper, and their ratio (ØR/ØS) was calculated by dividing rootstock diameter by scion diameter. The length of rootstock growth underground (RootstockUG) was also measured manually (cm). Afterwards, every root emerging from the rootstock stem was cut and scanned on an Expression 12000XL scanner (Epson Corporation, Suwa, Nagano, Japan); the total number of such roots was taken as the number of primary roots. The scanner was set to acquire 16-bit grayscale images at 600 dpi resolution using the transparency unit. The root samples were then dried at 80°C for five days to obtain the total dry weight of the roots. Scanned images were analyzed with RhizoVision Explorer version 2.0.3 (https://doi.org/10.5281/zenodo.5121845) (Seethepalli and York, 2020), which provided maximum diameter, root surface area, length, and volume, both as totals and split by root diameter range. Three root diameter ranges were considered: fine roots (< 2 mm), medium roots (2 to 5 mm), and coarse roots (> 5 mm).
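As an illustration of the projected-area calculation, the sketch below averages the root pixel area of the four rotation images after pixel-to-mm conversion; it assumes pre-binarized images with nonzero root pixels and an illustrative scale value, and is not the ImageJ workflow itself.

```python
# Sketch of the projected-area computation: average the root pixel area
# of the four rotation images (0, 90, 180, 270 degrees), converted to mm^2.
# Assumes binary images where root pixels are nonzero; names are illustrative.
import numpy as np
from skimage import io

PX_PER_MM = 12.5  # set from the ruler in the frame (illustrative value)

areas_mm2 = []
for angle in (0, 90, 180, 270):
    binary = io.imread(f"root_{angle}.png", as_gray=True) > 0
    areas_mm2.append(binary.sum() / PX_PER_MM**2)   # pixel count -> mm^2

projected_area = float(np.mean(areas_mm2))  # projected area of whole root system
print(f"Projected root area: {projected_area:.1f} mm^2")
```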
5) Outreach and dissemination of results to the scientific community. We published one paper disseminating the research outcomes at a top robotics conference, the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). We also presented the research outcomes on 3D modeling for crops in 10 invited talks, including university and company seminars, conference keynote talks, and panel discussions. Through these outreach and dissemination activities, the 3D crop modeling and trait extraction research has reached over 600 farmers, breeders, and researchers in agriculture, AI, robotics, and computer vision.

      Publications

      • Type: Conference Papers and Presentations Status: Accepted Year Published: 2023 Citation: Guoyu Lu, Bird-view 3D Reconstruction for Crops with Repeated Textures, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2023.
      • Type: Journal Articles Status: Published Year Published: 2023 Citation: Guoyu Lu, Sheng Li, Gengchen Mai, Jin Sun, Dajiang Zhu, Lilong Chai, Haijian Sun et al. "AGI for Agriculture." arXiv preprint arXiv:2304.06136, 2023


      Progress 02/15/21 to 02/14/22

      Outputs
Target Audience: During this project reporting period we have reached out to various stakeholders through in-person meetings, presentations, and publications: 1) Apple growers 2) Sweet potato growers 3) AI scientists 4) Extension educators 5) Crop consultants 6) Students 7) Plant scientists Changes/Problems: Nothing Reported What opportunities for training and professional development has the project provided? We have been training and educating students, a postdoc, and extension educators to understand AI methodologies, the ways AI can help agriculture, and the roles of roots, rootstocks, and viruses in crop health and productivity. This training covered several technical skills, including computer vision, artificial intelligence, ELISA tests, statistical analysis, and interpretation of multivariate data. How have the results been disseminated to communities of interest? We have published two papers: one in a top AI journal (Simultaneous Direct Depth Estimation and Synthesis Stereo for Single Image Plant Root Reconstruction, IEEE Transactions on Image Processing, 2021) and one in conference proceedings. We gave a presentation at an AI computer vision conference (WACV), with over 600 participants, on the topic "3D Modeling Beneath Ground: Plant Root Detection and Reconstruction Based on Ground-Penetrating Radar". We also presented the research outcomes (titled "3D Crop Structure Modeling Based on Mobile Platforms") in 6 seminars to both industry (Google, 2021) and universities (UF Department of Soil, Water, and Ecosystem Sciences, 2022; UF Southwest Florida Research and Education Center, 2022; UGA Department of Electrical and Computer Engineering, 2022; UGA Phenomics and Plant Robotics Center, 2021; Purdue University Department of Agronomy, 2021). We also gave a presentation, "Underground Contributions to Declining Blocks in High-Density Orchards," at a summer tour organized by the Cornell Cooperative Extension-Lake Ontario Fruit team in 2021. What do you plan to do during the next reporting period to accomplish the goals? We will continue making progress toward the specific objectives outlined in the proposal. Specific tasks for the next reporting period include: 1) More accurately reconstruct the 3D root models on both virtual and real-world datasets. 2) Develop algorithms to extract more traits from the 3D root models, including 3D root skeletons, angles, curvatures, coarseness, and diameter (see the sketch after this list). 3) Complete the scanning, imaging, and analysis of the root systems and root traits of trees excavated from commercial orchards and sweet potato fields. 4) Evaluate virus status, root systems, and root traits of grafted apples and sweet potatoes over time in the greenhouse and research crop fields. 5) Perform multivariate analysis to identify the roles of viruses, root traits, and their interaction in apple tree health and vigor, and in sweet potato. 6) Continue communicating results through presentations at extension and scientific meetings and through publications.
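As a pointer to how such skeleton-based traits can be computed, the sketch below derives length and emergence angle from a root branch represented as an ordered 3D polyline; the representation and the numbers are assumptions made for illustration, not the project's algorithm.

```python
# Sketch: length and emergence angle of a root branch represented as an
# ordered 3D polyline (N x 3 array of skeleton points). Purely illustrative.
import numpy as np

branch = np.array([[0.0, 0.0, 0.0],
                   [0.5, -0.2, -1.0],
                   [0.9, -0.5, -2.1],
                   [1.2, -0.6, -3.0]])

# Total length: sum of distances between consecutive skeleton points.
length = np.linalg.norm(np.diff(branch, axis=0), axis=1).sum()

# Emergence angle: angle between the first segment and the downward vertical.
v = branch[1] - branch[0]
vertical = np.array([0.0, 0.0, -1.0])                  # unit vector
angle = np.degrees(np.arccos(v @ vertical / np.linalg.norm(v)))

print(f"Branch length: {length:.2f}, emergence angle: {angle:.1f} deg")
```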

      Impacts
What was accomplished under these goals? In general, we have made significant progress toward the main goal, the four specific objectives of the project, and the associated tasks during this reporting period. We have a detailed progress report with pictures, figures, graphs, and tables, but unfortunately the online reporting system does not allow file attachments; more details are given in our published and submitted papers. Objective 1: Design a mobile AI platform based on unsupervised deep neural networks to reconstruct 3D plant root systems. 1) Plant roots are the main conduit for a plant's interaction with its physical and biological environment. A 3D root system architecture can provide fundamental and applied knowledge of a plant's ability to thrive, but constructing 3D structures for thin and complicated plant roots is challenging. We propose an unsupervised learning scheme to estimate root depth from a single image as input, which is then applied to reconstruct the complete root system. Results on both a real plant root dataset and a synthetic dataset demonstrate the effectiveness of the proposed algorithm compared with state-of-the-art image-based 3D reconstruction models on plant roots. 2) We also developed a structure-from-motion based deep neural network for plant root reconstruction in a self-supervised manner, suitable for deployment on mobile phone platforms. The IMU in the mobile phone is further utilized to improve the pose estimation network by continuously updating the correct scale from gyroscope and accelerometer measurements. The proposed approach resolves the scale ambiguity in recovering the absolute scale of real plant roots, jointly improving camera pose estimation and 3D root reconstruction (the scale-correction principle is sketched below). 3) Recovering the 3D shapes of hidden and buried objects remains a challenge. Ground Penetrating Radar (GPR) is among the most powerful and widely used instruments for detecting and locating underground objects such as plant roots and pipes, with affordable prices and continually evolving technology. We first propose a deep convolutional neural network-based, anchor-free GPR curve signal detection network that operates on B-scans from a GPR sensor. A graph neural network-based root shape reconstruction network is then designed to progressively recover the geometry of the major taproot and then the fine root branches. Our results demonstrate the potential of the proposed framework as a new approach to fine-grained, non-destructive underground object shape reconstruction.
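To make the IMU-based scale correction concrete, the sketch below estimates a single metric scale factor by aligning SfM translation magnitudes with IMU-integrated displacements; it is a simplified illustration of the principle, not the network described above, and it assumes gravity-compensated IMU displacements are already available.

```python
# Sketch of monocular-SfM scale correction using IMU-derived displacement.
# Monocular SfM recovers camera translation only up to an unknown scale;
# comparing it with the metric displacement integrated from the IMU yields
# a scale factor that makes the reconstruction metric. Illustrative only.
import numpy as np

def metric_scale(sfm_translations, imu_displacements):
    """sfm_translations: (N, 3) per-frame-pair translations from SfM
    (arbitrary scale). imu_displacements: (N, 3) matching displacements
    integrated from gravity-compensated accelerometer/gyroscope data (m)."""
    sfm_norm = np.linalg.norm(sfm_translations, axis=1)
    imu_norm = np.linalg.norm(imu_displacements, axis=1)
    # Least-squares scale aligning SfM magnitudes with metric magnitudes.
    return float((sfm_norm @ imu_norm) / (sfm_norm @ sfm_norm))

# Usage: scaled_points = metric_scale(t_sfm, t_imu) * reconstructed_points
```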
Objective 2: Develop root trait extraction algorithms based on 3D root models. Based on the reconstructed 3D models, root volume and size can be measured more accurately than from 2D images. From the occupied 3D space we can estimate the rough volume and biomass of the roots; root size is measured from the reconstructed root dimensions, and single-root length is measured by counting the number of 3D points along the root branch. We are developing new algorithms to detect 3D root skeletons, from which many other traits such as 3D angles, curvatures, and spreading patterns can be inferred directly. Objective 3: Validate the mobile APP for root trait extraction on fruit, root and tuber crops. 1) We have created virtual roots with absolute ground truth to facilitate algorithm development and precise validation. Compared with plants above the ground, roots are difficult to observe because they are buried and have extremely complicated shapes. We created a dataset containing two kinds of plant roots: apple tree roots, which are thick and scattered with complex shapes, and sweet potato roots, which are relatively thin and scattered. Our dataset contains 800 3D models of apple tree roots and 400 3D models of sweet potato roots. In addition, we rotated the 3D model of each root in 10-degree steps about the vertical and horizontal axes and sampled a 2D image at each step, so each 3D model in our dataset is accompanied by 72 2D images from different angles (the viewpoint sampling is sketched below). Furthermore, unlike other root datasets, ours has ground truth, which allows researchers to test their 3D reconstruction algorithms quantitatively.
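A minimal sketch of the viewpoint sampling is shown below; the rotation convention (z vertical, x horizontal) is an assumption made for the example, and the actual renders were generated in Unity3D.

```python
# Sketch of the viewpoint sampling for the synthetic dataset: 36 steps of
# 10 degrees about the vertical axis plus 36 steps about a horizontal axis
# gives 72 views per model. Rendering itself was done in Unity3D; here we
# only enumerate the rotation matrices. Illustrative only.
import numpy as np

def rotation(axis, deg):
    """Rotation matrix about the 'vertical' (z) or 'horizontal' (x) axis."""
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    if axis == "vertical":
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

views = [rotation(axis, deg)
         for axis in ("vertical", "horizontal")
         for deg in range(0, 360, 10)]
assert len(views) == 72  # 36 azimuth + 36 elevation views per root model
```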
2) We have captured a large number of root images of young apple rootstocks and sweet potato before planting in the greenhouse and field. This image data was made available to the project PI to download and use for objectives 1, 2, and 3 of the project. Apple tree root samples were collected from two commercial apple orchards in New York State (USA). Orchard A (Wayne County, NY) was planted in 2017 with 'Honeycrisp' apples grafted onto the Malling 9 (M.9 NIC29) rootstock at 1.25 m x 3 m spacing between trees and rows, respectively. Orchard B (Saratoga County, NY) was planted in 2013 with 'Fuji' grafted onto the B.9 rootstock at 1.5 m x 4 m spacing between trees and rows. A total of 31 root systems were dug with shovels by tracing a circumference 40 cm from the tree trunk to a depth of approximately 35 cm. Root systems were stored in a cool, humid warehouse until processing. Images of the entire root system were taken by rotating the root system every 10 degrees until a 360-degree rotation was completed, using a Canon EOS Rebel T5 camera. Secondary roots were then counted manually, and root diameters were measured with a digital caliper. In addition, secondary roots were cut into small pieces and scanned on an Expression 12000XL scanner, set to acquire 16-bit grayscale images at 600 dpi resolution using the transparency unit. Finally, the root samples were dried at 80°C for five days to obtain root dry weight. Images of the whole root system were analyzed with ImageJ software version 1.8; the picture scale was set by converting pixels to mm using the scale in the picture, and the width and depth of the root system were measured with ImageJ tools (root angles can also be measured in ImageJ). Scanned images were analyzed with RhizoVision Explorer version 2.0.3, which provided maximum diameter, total area, total length, and total volume. We also scanned 8 sweet potato roots at 16, 20, and 23 days. The sweet potatoes were hung from the ceiling, fixed in place, and captured in 360-degree videos with a mobile phone camera, against a black/gray background to facilitate segmentation. 3) We also set up a new experiment in the greenhouse and research orchard at Cornell AgriTech to represent a high-density apple orchard, and have already captured root images of the young apple rootstocks before planting. In this experiment, 'Honeycrisp' is grafted onto the G.935 and M.9 NIC29 rootstocks in 40 replications. A Canon EOS Rebel T5 camera was used to photograph the root system of each plant before planting in the research orchard and greenhouse at Cornell AgriTech. In the orchard, plant and row spacing was kept at 0.7 m and 3 m, respectively, to represent a high-density apple orchard. The experiment was planted in 3 blocks for destructive harvesting every 4-5 months, to assess the root system manually and digitally and to validate the 3D algorithms. Objective 4: Outreach and dissemination of results to the scientific community. PI Guoyu Lu has presented our work at multiple conferences, companies, and universities. We presented our work at the WACV conference, which had over 600 attendees. The PI also presented the outcomes of the project to Google Wing, UGA, UF, and Purdue through invited seminars and other informal talks. Awais Khan presented "Underground Contributions to Declining Blocks in High-Density Orchards" at a summer tour organized by the Cornell Cooperative Extension-Lake Ontario Fruit team on August 12, 2021.

      Publications

      • Type: Journal Articles Status: Published Year Published: 2021 Citation: Yawen Lu, Yuxing Wang, Devarth Parikh, Awais Khan, Guoyu Lu, Simultaneous Direct Depth Estimation and Synthesis Stereo for Single Image Plant Root Reconstruction, IEEE Transactions on Image Processing, 2021
      • Type: Conference Papers and Presentations Status: Published Year Published: 2022 Citation: Yawen Lu, Guoyu Lu, 3D Modeling Beneath Ground: Plant Root Detection and Reconstruction Based on Ground-Penetrating Radar, IEEE Winter Conference on Applications of Computer Vision (WACV) 2022