Source: UNIV OF CALIFORNIA submitted to NRP
DSFAS: AI-BASED FAST AUTOMATED 3D SCENE RECONSTRUCTION OF PLANTS AND FARMS FOR AGRICULTURAL ROBOTS
Sponsoring Institution
National Institute of Food and Agriculture
Project Status
ACTIVE
Funding Source
Reporting Frequency
Annual
Accession No.
1032345
Grant No.
2024-67021-42528
Cumulative Award Amt.
$727,999.00
Proposal No.
2023-11702
Multistate No.
(N/A)
Project Start Date
Aug 1, 2024
Project End Date
Jul 31, 2028
Grant Year
2024
Program Code
[A1541]- Food and Agriculture Cyberinformatics and Tools
Recipient Organization
UNIV OF CALIFORNIA
(N/A)
LOS ANGELES, CA 90095
Performing Department
(N/A)
Non Technical Summary
Precision agriculture holds immense potential for improving farming practices, but current methods for monitoring and managing crops are often outdated and labor-intensive. Our project aims to address this challenge by developing an integrated software and hardware system for 3D reconstruction of farming environments. Led by a team of experts from the University of California, Los Angeles, and North Dakota State University, we will design, develop, and field-test an unmanned ground vehicle (UGV) equipped with advanced imaging capabilities. By autonomously capturing images from various camera angles, the UGV will enable farmers, agronomists, and roboticists to create virtual reality (VR) environments of farms, allowing them to visualize and interact with their crops in 3D. This innovative system will allow users to specify various parameters such as crop type, time of day, and season, and the VR environment will be able to emulate "time," generating dynamic 3D scenes representing the entire life cycle of crops. We envision three main application areas for this project: first, providing a complete package for robots and sensors used in precision agriculture, thus facilitating testing and deployment of robotic systems in real-world farming environments; second, offering a non-invasive tool for phenotyping in plant breeding and precision agriculture through automated 3D reconstruction of entire plants and farms; and third, enabling remote work for breeders and agronomists by providing accurate 3D representations of farming environments. Through collaboration with farmers and industry stakeholders, our project aims to advance knowledge and adoption of precision agriculture technologies. By providing new tools and techniques for monitoring and managing crops, we hope to improve farm productivity, efficiency, and sustainability, ultimately contributing to a more environmentally friendly agricultural industry.
Animal Health Component
25%
Research Effort Categories
Basic
50%
Applied
25%
Developmental
25%
Classification

Knowledge Area (KA)   Subject of Investigation (SOI)   Field of Science (FOS)   Percent
402                   7410                             2020                     50%
205                   2499                             2080                     50%
Goals / Objectives
The major goals of this project can be summarized as follows:

1. Design, development, and field-testing of an integrated software and hardware system: The primary goal of the project is to create an integrated system comprising software (computer vision, machine learning) and hardware (unmanned ground vehicle) to facilitate 3D reconstruction of farming environments. This system will enable users such as farmers, agronomists, and roboticists to create virtual reality (VR) environments of farms.

2. Dynamic 3D scene generation: The project aims to develop a system that can generate dynamic 3D scenes representing the entire life cycle of crops. Users will be able to specify parameters such as the type of crop, time of day, and season. Importantly, the VR environment will be able to emulate "time," providing users with a visual representation of the crop's entire life cycle (a minimal sketch of such a scene query appears after this list).

3. Addressing the gap in robotics and precision agriculture: The project aims to fill the gap in the testing of robots and sensors used in precision agriculture by providing a complete package for simulation environments. This will enable testing and validation of robotic systems and sensors before deployment in real-world farming environments.

4. Developing a fully automated robotic platform for 3D reconstruction: The project seeks to develop a fully automated robotic platform capable of creating 3D scenes of entire farms. This will provide an emerging non-invasive tool for phenotyping in plant breeding and precision agriculture.

5. Enabling remote work for breeders and agronomists: As breeders and agronomists increasingly work from remote locations, the project aims to enable remote tasks such as quantification of phenotypic traits by providing an accurate 3D representation of farming environments.

The project will be led by a qualified team comprising PD Jawed (expertise: robotics for precision agriculture) and co-PD Joo (expertise: computer vision, virtual reality, and AI) from UCLA, and co-PD Rahman, an agronomist at NDSU. Additionally, the team will collaborate with farmers and industry stakeholders for field trials, ensuring the relevance and applicability of the developed system in real-world farming scenarios.
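As a purely illustrative sketch of the scene query described in goal 2 above, the parameters a user might specify (crop, emulated date, time of day, season) could be grouped as below. All names here (SceneQuery, growth_stage) are assumptions for illustration, not a committed interface.

```python
# Hypothetical sketch of a dynamic-scene query for the planned VR environment.
# All names (SceneQuery, growth_stage) are illustrative assumptions, not any released API.
from dataclasses import dataclass
from datetime import date, time

@dataclass
class SceneQuery:
    crop: str            # e.g., "canola"
    query_date: date     # emulated calendar date within the growing season
    time_of_day: time    # would control sun position / lighting in the VR scene
    season: str          # e.g., "summer"; user-facing, implied by the date

def growth_stage(q: SceneQuery, planting: date, maturity: date) -> float:
    """Return a 0-1 fraction of the crop life cycle at the emulated date,
    which could be used to interpolate between reconstructed 3D snapshots."""
    total = max((maturity - planting).days, 1)
    elapsed = (q.query_date - planting).days
    return min(max(elapsed / total, 0.0), 1.0)

# Example: a mid-July afternoon view of a canola plot planted in mid-May.
q = SceneQuery("canola", date(2025, 7, 15), time(14, 0), "summer")
print(growth_stage(q, planting=date(2025, 5, 15), maturity=date(2025, 9, 1)))
```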
Project Methods
The project will be conducted in several stages, including design, prototyping, testing, and evaluation of an integrated software and hardware system for creating 3D scenes of plants and farms that evolve with time. The general scientific methods will involve a combination of computer vision, machine learning, robotics, and virtual reality (VR) techniques. Unique aspects of the project include the development of a fully autonomous unmanned ground vehicle (UGV) with a manipulator for image capture, as well as the integration of time-based 3D scene evolution in the virtual reality environment.

Project Phases:

Design and Prototyping:
- Software Development: Design and development of computer vision and machine learning algorithms for image processing and 3D reconstruction.
- Hardware Design: Design and prototyping of an unmanned ground vehicle (UGV) with a manipulator for autonomous image capture.
- Integration: Integration of software and hardware components to create a unified system for 3D scene capture and reconstruction.

Testing and Optimization:
- Field Testing: Testing the integrated system in real-world farming environments to assess performance and reliability.
- Optimization: Iterative optimization of algorithms and hardware components based on testing results.

Data Collection and Analysis:
- Image Capture: Autonomous image capture by the UGV at various camera angles using the robotic manipulator.
- 3D Reconstruction: Reconstruction of farm environments and plants in a virtual reality (VR) environment using computer vision techniques.
- Time-based Evolution: Incorporation of time-based evolution of 3D scenes to simulate plant growth and farm activities over time (a minimal pipeline sketch follows this section).

Evaluation and Impact Assessment:
- Change in Knowledge: Formal and informal educational programs will be conducted to disseminate project findings to target audiences, including farmers, agricultural professionals, roboticists, and educators. Workshops, training programs, and outreach activities will be organized to increase awareness and understanding of the project outcomes.
- Change in Action: Adoption of precision agriculture technologies and practices will be measured through surveys, interviews, and assessments of farm management practices. Changes in behavior and practices among target audiences will be assessed through pre- and post-project surveys and interviews.
- Change in Condition: Improvement in farm productivity, efficiency, and sustainability will be evaluated through quantitative measures such as yield increase, resource savings, and environmental impact. Enhanced research capabilities and innovation in the agricultural sector will be assessed through metrics such as publications, patents, and technology transfer activities.

Evaluation Plan:
- Milestone Evaluation: Evaluation of project milestones and deliverables, including software prototypes, hardware prototypes, and field testing results. Assessment of project progress against timeline and budget.
- Performance Evaluation: Evaluation of the performance and reliability of the integrated software and hardware system in real-world farming environments. Assessment of the accuracy and efficiency of 3D scene reconstruction and time-based evolution.
- Impact Assessment: Evaluation of the impact of the project on target audiences, including changes in knowledge, actions, and conditions. Measurement of the adoption of precision agriculture technologies and practices among farmers and agricultural professionals. Assessment of the contribution of the project to research innovation and technology transfer in the agricultural sector.

Data Collection:
- Quantitative data collection through surveys, interviews, and assessments of farm management practices.
- Qualitative data collection through case studies, focus groups, and expert opinions.
- Longitudinal data collection to assess the long-term impact of the project on target audiences and the agricultural sector.
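To make the staged data flow concrete, the following is a minimal, hedged sketch of the intended capture-reconstruct-evolve loop. Every function name and signature here is an illustrative placeholder, not project code; the actual reconstruction and navigation backends are developed separately.

```python
# Illustrative skeleton of the capture -> reconstruct -> evolve data flow
# described above. All functions are placeholder assumptions.
from typing import List, Tuple

def capture_images(ugv_waypoints: List[Tuple[float, float]], angles_per_stop: int) -> List[str]:
    """Drive the UGV through waypoints and capture images at several manipulator
    angles per stop; returns image file paths (stand-in for the real capture step)."""
    return [f"img_{i}_{a}.png" for i, _ in enumerate(ugv_waypoints)
            for a in range(angles_per_stop)]

def reconstruct_scene(image_paths: List[str]) -> dict:
    """Build a 3D scene (e.g., point cloud or splat set) from one capture session.
    Stand-in for the actual reconstruction backend."""
    return {"n_images": len(image_paths), "points": []}

def evolve_scene(snapshots: List[dict], t: float) -> dict:
    """Emulate 'time' by selecting/interpolating between dated reconstructions (0 <= t <= 1)."""
    idx = min(int(t * (len(snapshots) - 1)), len(snapshots) - 1)
    return snapshots[idx]

# Example: two capture sessions become two dated snapshots; query mid-season.
day1 = reconstruct_scene(capture_images([(0.0, 0.0), (1.0, 0.0)], angles_per_stop=4))
day2 = reconstruct_scene(capture_images([(0.0, 0.0), (1.0, 0.0)], angles_per_stop=4))
print(evolve_scene([day1, day2], t=0.5))
```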

Progress 08/01/24 to 07/31/25

Outputs
Target Audience: During the first year of the project, our efforts reached a diverse audience including researchers, students, industry professionals, and agricultural extension specialists. Specifically, we have reached the following groups:

Research Community (Robotics, Computer Vision, and Precision Agriculture): We have prepared a total of five scholarly articles, all of which are undergoing review. The preprint of one of the articles is available on arXiv (https://arxiv.org/abs/2412.03472). This paper shares our progress in 3D reconstruction for precision agriculture. More details on our publications are provided later in this report.

Agricultural Extension Specialists and Practitioners: We built two unmanned ground vehicles (UGVs) and shipped them to our collaborator, North Dakota State University (NDSU). Two graduate students, one postdoctoral researcher, and the PI traveled to NDSU for a one-week trip. They trained the NDSU students to operate the robots, which have been collecting data for 3D reconstruction since July 2, 2025. One of the robots is imaging a trial plot of canola multiple times a day. This has produced a dataset (tens of terabytes) that captures the growth stages of canola - a valuable resource for further research in 3D reconstruction and autonomous navigation. We conducted a site visit to the University of Nevada, Reno (Las Vegas), where we engaged with professors and extension specialists to discuss agricultural data collection methodologies and practical applications of our system. We initiated collaborations with Matt Conroy (GoodFarms) and Andre Biscaro (Extension Specialist, Ventura County, CA) to align our project with real-world farming needs.

Undergraduate Students (STEM Education and Workforce Development): Five undergraduate students at UCLA have been actively involved in designing and building a robotic data collection setup. This hands-on experience provides them with interdisciplinary training in robotics, computer vision, and agricultural technology, contributing to workforce development in precision agriculture and autonomous systems.

Industry Stakeholders (Robotic Hardware and Precision Agriculture Solutions): We procured a robotic platform from Indro Robotics, engaging with industry suppliers to explore the integration of autonomous robotic solutions for agricultural applications.

Changes/Problems: Nothing Reported

What opportunities for training and professional development has the project provided? The project has provided valuable training and professional development opportunities for undergraduate and graduate students, as well as fostering interdisciplinary collaboration. Five undergraduate students at UCLA are actively involved in the project through the Undergraduate Research Center, which supports research participation without requiring financial allocation from the project budget. These students are gaining hands-on experience in robotics, computer vision, and agricultural sensing, equipping them with technical skills relevant to precision agriculture and autonomous systems. A postdoctoral researcher specializing in computer vision joined the PD's lab on February 1, 2025; he has been supervising undergraduate students and collaborating with graduate students. The project has also facilitated international collaboration through an exchange program: two visiting graduate students have been working with the PD, bridging expertise between a robotics and engineering-focused lab and a computer vision lab. This collaboration enhances the interdisciplinary nature of the research and strengthens technical exchanges between different research domains. The software developed under this project, Measure Anything, is available as an open-source tool on GitHub (https://github.com/StructuresComp/measure-anything). This resource provides an opportunity for students, researchers, and professionals to learn about computer vision-based measurement techniques, segmentation, and 3D reconstruction, making it a valuable tool for education and skill development (an illustrative sketch of the underlying mask-to-diameter idea appears at the end of this section). All of our papers (five preprints in total) will be accompanied by a publicly accessible dataset and/or software. Links to the publicly accessible repositories will be made available once the papers are published.

How have the results been disseminated to communities of interest? The results of this project have been disseminated primarily through open-access preprints and open-source software releases, ensuring broad accessibility to researchers, industry professionals, and the agricultural community. A key dissemination effort is the preprint of our research paper Measure Anything, available on arXiv (https://arxiv.org/abs/2412.03472), which introduces a vision-based framework for dimensional measurement with applications in plant phenotyping and robotic automation. The corresponding codebase has been released on GitHub (https://github.com/StructuresComp/measure-anything), enabling researchers and practitioners to adopt and build upon our methods. In addition to Measure Anything, four additional papers have been publicly released and are currently under peer review at top computer vision and AI conferences. These include: (1) HiddenObject: Modality-Agnostic Fusion for Multimodal Hidden Object Detection, which improves detection under occlusion using RGB, thermal, and depth fusion; (2) NTRSplat: Multimodal Gaussian Splatting for Agricultural 3D Reconstruction, accepted to the CVPR 2025 Workshop on Neural Fields, which introduces a multimodal dataset and a robust 3D reconstruction pipeline; (3) Reconstruction Using the Invisible, which leverages near-infrared data and vegetation indices to enhance 3D modeling; and (4) DePT3R, which proposes a novel method for dense point tracking and reconstruction in dynamic scenes without requiring camera calibration. Collectively, these dissemination efforts are advancing the fields of computer vision, robotics, and precision agriculture through transparent, accessible, and reusable contributions.

What do you plan to do during the next reporting period to accomplish the goals? To further advance the project goals, we will focus on integrating hardware and software components, conducting field and indoor experiments, and refining robotic navigation algorithms to improve automated data collection. A major priority is building an automated robotic data collection setup capable of operating in field conditions. We plan to deploy this system in Fargo, North Dakota, where it will collect multi-day datasets of crop environments. These field trials will provide real-world validation of our 3D reconstruction framework and help refine our system for large-scale agricultural applications. In parallel, we are developing a robotic setup for indoor data collection using indoor plants at UCLA. This controlled environment will allow us to systematically test and optimize our methods before field deployment. We plan to create a high-quality dataset for 3D reconstruction and release it alongside a publication, making it a valuable resource for researchers in precision agriculture and computer vision. The indoor experiments at UCLA will serve as a crucial testbed for our outdoor fieldwork in Fargo; by first validating our approach in a controlled setting, we will ensure that our methodologies are robust and well-adapted for real-world deployment. Additionally, we are working on autonomous navigation algorithms to enhance the reliability of robotic data collection. Robust navigation is essential for ensuring that the robot can operate autonomously in complex farming environments without human intervention. These improvements will contribute to the broader goal of developing a fully automated system for 3D reconstruction and phenotyping in precision agriculture.
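As a teaching aid only, the sketch below illustrates the general mask-to-diameter idea behind vision-based stem measurement (segmentation mask, skeleton, local width, depth-based scaling). It is not the Measure Anything API or pipeline; the pinhole-camera scaling and all names are assumptions made for this illustration.

```python
# Minimal illustrative sketch: estimate the width of a stem-like object from a
# binary segmentation mask via skeletonization and a distance transform.
# This is NOT the Measure Anything API; it only illustrates the general idea.
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize

def stem_diameter_px(mask: np.ndarray) -> float:
    """Median diameter (in pixels) along the skeleton of a binary mask."""
    dist = distance_transform_edt(mask)       # distance to the nearest background pixel
    skel = skeletonize(mask.astype(bool))     # 1-pixel-wide centerline
    radii = dist[skel]                        # local radius at each skeleton pixel
    return float(2.0 * np.median(radii))

def px_to_mm(diameter_px: float, depth_m: float, focal_px: float) -> float:
    """Convert a pixel diameter to millimeters under a pinhole-camera assumption."""
    return diameter_px * depth_m / focal_px * 1000.0

# Example on a synthetic vertical "stem" 11 pixels wide.
mask = np.zeros((100, 60), dtype=np.uint8)
mask[:, 25:36] = 1
d_px = stem_diameter_px(mask)                 # roughly the stripe width in pixels
print(d_px, px_to_mm(d_px, depth_m=0.5, focal_px=700.0))
```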

Impacts
What was accomplished under these goals?

Advancing Computer Vision for Precision Agriculture: A major milestone in the first year of the project was the development of "Measure Anything", a vision-based framework for dimensional measurement of objects with circular cross-sections. This work is documented in a preprint available on arXiv (https://arxiv.org/abs/2412.03472). The framework integrates segmentation, mask processing, skeletonization, and 2D-3D transformation, aligning with the goal of developing robust software tools for 3D reconstruction.

Validating Vision-Based Measurement for Agricultural Applications: The Measure Anything framework was applied to estimate the diameters of canola stems, a key phenotypic trait correlated with plant health and yield. The analysis leveraged real-world agricultural data collected from fields in North Dakota, directly supporting the objective of enabling non-invasive phenotyping for plant breeding. This work serves as a foundation for future applications in automated 3D reconstruction, as accurate object measurements are essential for generating precise farm-scale virtual models.

Multimodal Object Detection in Complex Farm Environments: We prepared a paper (under review) titled "HiddenObject: Modality-Agnostic Fusion for Multimodal Hidden Object Detection" that addresses the challenge of detecting occluded or concealed objects in visually degraded agricultural settings. In this paper, we introduce a fusion framework that integrates RGB, thermal, and depth data using a Mamba-based architecture to improve detection performance under occlusion, camouflage, and lighting variation. This method significantly outperforms unimodal approaches, supporting our project's goal of building robust vision systems for field conditions with incomplete or noisy visual information.

Improving 3D Reconstruction Using Multimodal Agricultural Data: We prepared a paper (accepted to the CVPR 2025 Workshop on Neural Fields) titled "NTRSplat: Multimodal Gaussian Splatting for Agricultural 3D Reconstruction" that advances 3D reconstruction in challenging outdoor farming environments. In this paper, we introduce NTRPlant, a novel dataset with Near-Infrared (NIR), RGB, LiDAR, depth, and metadata, and present a new Gaussian splatting method that leverages cross-attention and positional encoding. The model effectively handles occlusions and lighting variations and outperforms existing baselines, aligning with our objective to generate high-fidelity 3D models of crop scenes.

Leveraging Spectral Data for High-Fidelity Farm Modeling: We prepared a paper (under review) titled "Reconstruction Using the Invisible: Intuition from NIR and Metadata for Enhanced 3D Gaussian Splatting" that focuses on using NIR imagery and vegetation indices to improve 3D reconstruction quality. In this paper, we enhance the splatting framework with inputs such as NDVI, NDWI, and chlorophyll indices to infer structure in visually ambiguous areas (a minimal sketch of these index computations follows this section). This contributes to our goal of generating dynamic and accurate virtual representations of crops, even when RGB data is limited or unreliable.

Real-Time 3D Reconstruction and Tracking for Robotic Platforms: We prepared a paper (under review) titled "DePT3R: Joint Dense Point Tracking and 3D Reconstruction of Dynamic Scenes in a Single Forward Pass" that introduces a method for simultaneously reconstructing and tracking dynamic scenes without requiring camera calibration. In this paper, we propose a fast, unified framework that works on unposed image collections and outputs dense 3D reconstructions in a single pass. This enables real-time data collection for robotic systems operating in dynamic, unstructured farming environments, directly supporting our hardware-software integration goal.

Field Trials in Summer 2025 and Engagement with Stakeholders: We built two unmanned ground vehicles (UGVs) and shipped them to our collaborator, North Dakota State University (NDSU). Two graduate students, one postdoctoral researcher, and the PI traveled to NDSU for a one-week visit to train local students in robot operation. Since July 2, 2025, the robots have been actively collecting data for 3D reconstruction, including high-frequency imaging of a canola trial plot. This effort has produced a large-scale dataset (tens of terabytes) capturing the crop's growth stages - an essential resource for advancing 3D reconstruction and autonomous navigation. We also conducted a site visit to the University of Nevada, Reno (Las Vegas), engaging with professors and extension specialists. Additionally, we initiated collaborations with Matt Conroy (GoodFarms) and Andre Biscaro (Extension Specialist, Ventura County, CA), resulting in a preliminary data collection plan for 2025 aligned with real-world agricultural needs.
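For readers unfamiliar with the vegetation indices named above, the sketch below shows the standard NDVI and NDWI formulas computed from aligned NIR and RGB imagery. The formulas are textbook definitions; stacking them as extra per-pixel channels for a reconstruction backend is shown only as an illustrative assumption, not as the paper's actual pipeline.

```python
# Standard vegetation-index computations from aligned NIR and RGB channels.
# The index formulas are textbook definitions; the "features" stacking below is
# only an illustrative assumption about how extra channels might be supplied.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def ndwi(green: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalized Difference Water Index (McFeeters): (Green - NIR) / (Green + NIR)."""
    return (green - nir) / np.clip(green + nir, 1e-6, None)

# Example: stack the indices with RGB as additional per-pixel input channels
# (hypothetical usage for a multimodal reconstruction model).
h, w = 4, 4
rgb = np.random.rand(h, w, 3)
nir = np.random.rand(h, w)
features = np.dstack([rgb, ndvi(nir, rgb[..., 0]), ndwi(rgb[..., 1], nir)])
print(features.shape)  # (4, 4, 5)
```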

Publications