Source: OREGON STATE UNIVERSITY
ROBOTIC PRUNING IN MODERN ORCHARDS
Sponsoring Institution
National Institute of Food and Agriculture
Project Status
EXTENDED
Funding Source
Reporting Frequency
Annual
Accession No.
1023398
Grant No.
2020-67021-31958
Project No.
OREPRU20
Proposal No.
2019-06458
Multistate No.
(N/A)
Program Code
A1521
Project Start Date
Jul 15, 2020
Project End Date
Jul 14, 2024
Grant Year
2020
Project Director
Davidson, J. R.
Recipient Organization
OREGON STATE UNIVERSITY
(N/A)
CORVALLIS, OR 97331
Performing Department
CE Indust/Mnfctr Engineering
Non Technical Summary
Pruning - a critical perennial operation required to maintain tree health and produce high yields of quality fruit - is one of the most labor-intensive orchard activities in the production of high-value tree fruit crops. As the fresh market tree fruit industry continues to face the challenge of an uncertain labor force, the development of robotic technologies that are able to perform labor-intensive field operations - like pruning - will play a crucial role in its long-term sustainability. Automating selective pruning is a complex problem requiring high-resolution sensing, complex manipulation, and advanced decision making to determine which branches to prune. The specific research objective of this project is to develop a robotic system for autonomous, dormant pruning of fruit trees. To accomplish our objective, we have formed an experienced, interdisciplinary project team with expertise in horticulture, agricultural mechanization, computer graphics, and robotics.

Our approach builds on preliminary investigations, which indicate that machine vision systems can be used to analyze the manual pruning process to i) formulate pruning rules, and ii) identify pruning locations for autonomous pruning of fruit trees. Specifically, our approach is first to build high-quality digital models of the trees offline using a combination of artificial intelligence techniques and human intervention. Second, we use the knowledge gained from studying the manual pruning process to formulate pruning rules, which can be 'practiced' on the digital models and evaluated for their effectiveness. Third, we will use state-of-the-art machine learning algorithms to map pruning plans back to the real world in order to reduce onsite computational demands. Fourth, we will design a custom end-effector that can localize itself in real time in order to safely and reliably perform the pruning cuts. We expect that the proposed robotic pruning system will help sustain the global competitiveness of the US tree fruit industry. In addition, consumers will benefit through increased access to premium quality fruit.
Animal Health Component
0%
Research Effort Categories
Basic
25%
Applied
50%
Developmental
25%
Classification

Knowledge Area (KA) | Subject of Investigation (SOI) | Field of Science (FOS) | Percent
402                 | 1119                           | 2020                   | 100%
Goals / Objectives
The long-term goal of our research team is to increase yields of premium quality fruit, improve production efficiency, and reduce dependency on human physical labor through automation and robotics. The specific goal of this research project is to establish the feasibility of using a robotic system for dormant pruning of fruit trees, substantially reducing the human labor involved. This project has the following three objectives:
  • Create a collaborative human-robot training method for intelligent pruning
  • Create digital tree growth models that can be used both for developing pruning rules and offline path planning
  • Create an integrated perception and manipulation framework for fruit tree pruning
Project Methods
We will analyze manual pruning activities performed by domain experts (e.g. experienced workers, growers, and horticulturalists) to understand pruning patterns. We will then create an interactive, digital training environment where automated rule sets can be iteratively trained and evaluated, using the manual data for training.

We will use empirical data gathered throughout the growing season - via high-resolution, labeled scans of the trees - as input to the development of a synthetic digital tree model. After learning parameters that yield correct tree growth models, we will use the digital models to improve the pruning rules by predicting the resulting growth patterns.

Using the digital models and pruning rules, we will plan efficient pruning paths (offline). We will develop a perception system that maps this plan back to the real world, grounding pruning points in the robot's coordinate frame. We will also design a custom end-effector that is capable of performing the final localization and cut using a lightweight perception system mounted on the end-effector itself.

After simulations, functionality tests, and laboratory experiments, we will conduct field evaluations of the integrated robotic system at both WSU's Roza Research Orchard and commercial orchards. We will assess the system's accuracy by comparing the results of robotic pruning with those of trained workers. Performance measures to be used for the comparison include the percentage of successful pruning occurrences, Pruning Branch Proportion, and the resulting branch spacing and branch length. We will use overall cycle time to evaluate pruning throughput. Finally, the horticultural effects of pruning cuts (e.g. vegetative growth and distribution of fruiting sites) will also be assessed after robotic pruning.
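As a simple illustration of how the first two performance measures could be computed from field data, here is a minimal Python sketch; the data layout and function names are hypothetical, not part of the project (branch spacing, branch length, and horticultural effects would be measured separately):

    from dataclasses import dataclass
    from statistics import mean

    @dataclass
    class PruningTrial:
        """One attempted cut recorded during a field evaluation (hypothetical record)."""
        succeeded: bool       # was the branch severed at the target point?
        cycle_time_s: float   # seconds from start of scan to completed cut

    def summarize(trials):
        """Aggregate per-branch records into the headline comparison metrics."""
        return {
            "success_rate": sum(t.succeeded for t in trials) / len(trials),
            "mean_cycle_time_s": mean(t.cycle_time_s for t in trials),
        }

    # Example with mock data:
    print(summarize([PruningTrial(True, 32.0), PruningTrial(False, 41.5), PruningTrial(True, 28.3)]))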

Progress 07/15/22 to 07/14/23

Outputs
Target Audience: The target audience for this reporting period was the robotics research community and the tree fruit industry. During the prior year, we focused on disseminating our research results through technical conferences and presentations at industry and outreach events.

Changes/Problems: Nothing Reported

What opportunities for training and professional development has the project provided? For this performance period, the project provided research experience for two PhD students.

How have the results been disseminated to communities of interest? The results have been disseminated through one submission to an agricultural robotics workshop, one submission to an international precision agriculture conference, and two journal submissions. There were also presentations at an industry event (1x), an agricultural robotics workshop (1x), an outreach event for the general public (1x), and the OSU robotics seminar (1x):
  • J.R. Davidson, FIRA USA, "Robotic orchard pruning development," Fresno, CA, October 2022
  • J.R. Davidson, Corvallis Science Pub, "Where are the orchard bots? Thirty years of research on robotic fruit harvesting," Corvallis, OR, November 2022
  • J.R. Davidson, Oregon State University Robotics Seminar, "Towards tactile orchard robotics," Corvallis, OR, May 2023
  • C. Grimm, ICRA Workshop on TIG-IV: Agri-Food Robotics-From Farm to Fork, "Reach out and touch: Transitioning from visual to tactile perception for manipulation in orchards," London, United Kingdom, June 2023

What do you plan to do during the next reporting period to accomplish the goals? Our primary objectives for the next reporting period are the following:
  • Validate 3D skeletonization using the dataset collected in January 2023
  • Complete the fruit bud detection module and integrate it with the existing skeletonization model
  • Complete and evaluate a 3D Follow the Leader controller that can selectively scan vertical limbs in 3D, detect prunable side branches, and then execute cuts on them
  • Establish performance metrics, complete system integration, and conduct field trials in a commercial fruit orchard

Impacts
What was accomplished under these goals?

Fruit load estimation: Over the prior year we created a framework for estimating the potential fruit load that a branch can bear. Through a literature review, we determined that the geometric features of a branch provide valuable insight into its optimal load capacity. This information can greatly assist pruning and thinning, where the goal is to optimize fruit production. In parallel, we made significant progress in 3D skeletonization for tree structures. Our approach begins by scanning the tree to capture its 3D representation. We then employ semantically-based classification and labeling algorithms to identify and differentiate the trunk and branches within the point cloud. By extracting geometric features for each structure, we gain a comprehensive understanding of the tree's overall structure. Modeling and understanding the intricate structure of trees is important for accurately localizing potential pruning points. Additionally, we are actively developing methods for apple fruit bud detection, which aims to inform decision-making related to pruning activities.

Digital tree modeling: Over the prior year we also made progress in using LPy to create digital tree models. However, further progress has been delayed by two factors. The first is determining how to formalize "human" operations (e.g. tying down branches) in ways that can be easily translated to the language of L-systems. This is complex because L-systems must be interpreted in order to yield their geometry, while most human operations are performed in the geometric space; developing a suitable framework for applying operations to the interpreted system and then translating them back is nontrivial. The second factor is that additional work is required to determine what constitutes "acceptable" geometry and growth mechanics. Parallel research efforts within the AI Institute for Transforming Workforce & Decision Support (AgAID) have assumed responsibility for digital tree modeling.

Manipulation: Regarding our perception and manipulation framework, we have made significant progress since the previous field trials in March 2022. Our prior demonstration showed that pruning using only 2D data was indeed possible, but the system had several limitations, including no depth perception, a slow fixed-waypoint scanning procedure, and a visual servoing framework that was not robust. We have made various improvements to address these issues. For the scanning procedure, in January 2023 we demonstrated the robot's ability to follow a dynamic path up and down a leader branch by fitting a curve to the segmented mask and outputting a control velocity, as sketched below. We are further expanding this framework to model the 3D structure of the branches using just the 2D camera feed, enabling the robot to keep a set distance from the leader and to estimate the orientation of the branches to be cut.
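To make the scanning behavior concrete, the following minimal Python sketch shows one way such a curve-fit controller could work; the quadratic fit, gains, and image-frame velocity mapping are our own illustrative assumptions, not the project's actual implementation:

    import numpy as np

    def leader_velocity(mask: np.ndarray, v_climb: float = 0.05, k_lat: float = 0.002):
        """Given a binary H x W leader mask from the segmentation network, fit a
        curve to the branch centerline and return an image-frame velocity command
        (lateral, vertical) that keeps the leader centered while climbing."""
        rows, cols = np.nonzero(mask)
        if rows.size < 100:                  # leader lost: stop and let scan logic recover
            return 0.0, 0.0
        ys = np.unique(rows).astype(float)   # one centerline x-coordinate per image row
        xs = np.array([cols[rows == r].mean() for r in np.unique(rows)])
        centerline = np.polynomial.Polynomial.fit(ys, xs, deg=2)   # x as a function of y
        # Lateral error: where the fitted centerline crosses the image's middle row
        err_px = centerline(mask.shape[0] / 2.0) - mask.shape[1] / 2.0
        return -k_lat * err_px, v_climb      # proportional recentering + constant climb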

Publications

  • Type: Journal Articles Status: Under Review Year Published: 2023 Citation: D. Ahmed, R. Sapkota, M. Churuvija, and M. Karkee, Machine vision-based crop-load estimation using YOLOv8
  • Type: Conference Papers and Presentations Status: Published Year Published: 2023 Citation: N. Parayil, A. You, C. Grimm, and J.R. Davidson, Follow the leader: A path generator and controller for precision tree scanning with a robotic manipulator, In Proc. 14th European Conf. on Precision Agriculture (ECPA), Bologna, Italy, July 2023, pp. 167-174.
  • Type: Other Status: Published Year Published: 2023 Citation: A. You, J. Hemming, C. Grimm, and J.R. Davidson, Branch orientation estimation using point tracking with a monocular camera, in IEEE Intl Conf. on Robotics and Automation (ICRA) Workshop on: TIG-IV: Agri-Food Robotics-From Farm to Fork, London, United Kingdom, June 2023, 2pp.


Progress 07/15/21 to 07/14/22

Outputs
Target Audience: The target audience for this reporting period was the robotics research community and the tree fruit industry. During the prior year, we focused on disseminating our research results through technical conferences and presentations at grower extension events.

Changes/Problems: Nothing Reported

What opportunities for training and professional development has the project provided? For this performance period, the project has provided research experience for three PhD students.

How have the results been disseminated to communities of interest? The results have been disseminated through two submissions to international robotics conferences, one submission to an international horticulture congress, and one journal submission. There were also two invited talks at tree fruit industry/extension events during the prior performance period:
  • J.R. Davidson, U.S. Highbush Blueberry Council + North American Blueberry Council Technology Symposium, "Recent advances in robotic pruning," Salem, OR, September 2021
  • J.R. Davidson, OSU Cherry Day, "Recent work on robotic pruning," Virtual, February 2022

What do you plan to do during the next reporting period to accomplish the goals? For the upcoming year, we would like to improve the components of the existing framework, such as increasing the accuracy of the hybrid controller and increasing the throughput of the hardware. In addition, we will focus on the development of a digital twin orchard model. Our intent is to build realistic models of spindle apple trees using L-systems (see the sketch below). The digital twin will allow us to pursue several avenues of new research. First, we would like a pruning system that can autonomously determine pruning points given an input tree model: the digital twin will enable us to simulate the effects of various pruning rules on the growth of the tree and ultimately find a pruning rule set that optimizes some metric of interest, e.g. fruit yield. The digital twin will also allow us to analyze the effect of manipulator design on the reachability of pruning points on a typical apple tree. The ultimate goal of this avenue of research is to execute an optimization procedure that produces a manipulator design with the best-posed kinematics for pruning, alleviating the manipulability concerns we encountered with the previous hardware setup. In the coming year, the WSU team plans to continue developing the perception framework to create a time-efficient and accurate algorithm for analyzing overall tree structures and their geometric attributes. This will then be combined with pruning rules to identify pruning branches and estimate pruning locations. We will also investigate tree bud detection algorithms and integrate them with our perception framework to see if this information improves the decision-making process for dormant pruning.
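L-systems separate growth rules from geometry: rewrite rules expand a symbol string, and a separate interpretation step turns the string into branches. The following minimal Python sketch illustrates the rewriting half; the rules here are toy placeholders of our own, not a validated apple tree model like the ones planned above:

    # Minimal L-system sketch. Rules rewrite the string symbol by symbol; geometry
    # appears only when the final string is interpreted (e.g., by a turtle renderer).
    RULES = {"A": "F[+A][-A]FA",   # apex spawns two side branches and keeps growing
             "F": "FF"}            # internodes elongate each step

    def rewrite(axiom: str, steps: int) -> str:
        s = axiom
        for _ in range(steps):
            s = "".join(RULES.get(ch, ch) for ch in s)
        return s

    print(rewrite("A", 2))   # -> "FF[+F[+A][-A]FA][-F[+A][-A]FA]FFF[+A][-A]FA"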

Impacts
What was accomplished under these goals?

The past year of work focused primarily on improving sensing techniques to extract useful tree features, integrating perception and manipulation algorithms in a robot controller, and conducting pruning field trials in realistic outdoor conditions.

The performance of a robotic pruner depends heavily on its ability to construct accurate 3D models of trees to select cutting points and drive the cutting tool to the right location. However, the unstructured nature of tree growth prevents fixed-position imaging systems from accomplishing this task because occluded portions of the tree cannot be captured. To address this issue, the WSU team developed a novel imaging technique using coordinated cameras: a fixed-position imager generates an initial coarse 3D point cloud of the tree, while an imaging subsystem dynamically refines the model from close range using different poses to overcome occlusions. To evaluate system performance, a wide range of branch diameters, side-branch spacings between branching points, and side-branch lengths were manually measured in the fine point clouds and compared with ground truth. The analysis suggests the technique has potential for automated tree pruning and could also be helpful for robotic thinning and harvesting in different tree crops and architectures.

Another priority over the prior year was automating pruning decisions using image-based techniques. In the Upright Fruiting Offshoot (UFO) architecture there are two main pruning rules: 1) prune all lateral shoots from upright offshoots (or leaders), and 2) remove the most vigorous leader every year for renewal. Practically, the leader with the largest diameter at its base is a good candidate for removal (i.e. pruning rule #2). Using an active lighting stereovision system developed by collaborators at Carnegie Mellon University, a Mask R-CNN computer vision model was trained to directly detect leaders without segmentation or pre-processing. From there, the diameter was estimated for each leader at its base, and the leader with the largest measured diameter was selected for removal. The Mask R-CNN correctly identified the largest-diameter leader in 94% of test images when compared to ground truth annotations.

Our robot combines segmentation with a hybrid controller to execute pruning cuts. Foreground segmentation is an important task since we only wish to operate on trees directly in front of the robot, and in high-density orchard systems foreground branches can be difficult to distinguish from background rows. The hybrid controller uses this segmented environment representation to guide the cutters toward the branch to be pruned. Once contact between the cutter and branch is detected, the controller switches over to a force-based admittance controller that handles the interaction of the robot with the environment and guides the branch into the inside of the cutter (a sketch of this switching logic follows below). By using image-based control, the system is robust to a variety of sources of error.

We integrated our algorithms into a complete autonomous pruning system that operates by scanning a predefined region of the tree and detecting prunable branches. The scanning procedure moves the robot arm through a sequence of predefined joint positions; at each position, we acquire images and use the segmentation framework to generate a foreground mask. We then feed this mask through a Mask R-CNN network to generate pruning point detections in the image (i.e. pruning rule #1). Once these image points are turned into 3D estimates, we move the robot arm to an approach position for each detected pruning point and run the hybrid controller to guide the branch into the cutter and prune it. We evaluated the system at a UFO sweet cherry orchard (Prosser, WA) in March 2022, ultimately achieving a cutting success rate of 58%.
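As a rough illustration of the vision-to-force switching idea (not the learned hybrid controller reported in the publications below; the contact threshold and admittance gains are assumed values), one control tick might look like this in Python:

    import numpy as np

    F_CONTACT = 2.0   # N; contact-detection threshold (illustrative value)
    M, B = 2.0, 25.0  # virtual mass and damping for the admittance law (assumed gains)

    def hybrid_step(visual_vel, wrench, vel_prev, dt=0.01):
        """One control tick: track the image-based velocity while in free space,
        then yield to the measured contact force so the branch is guided into
        the cutter rather than pushed away."""
        force = np.asarray(wrench[:3], dtype=float)
        if np.linalg.norm(force) < F_CONTACT:
            return np.asarray(visual_vel, dtype=float)   # free space: visual servoing
        # In contact, first-order admittance: M * dv/dt + B * v = F
        return vel_prev + dt * (force - B * vel_prev) / M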

Publications

  • Type: Conference Papers and Presentations Status: Awaiting Publication Year Published: 2022 Citation: A. You, H. Kolano, N. Parayil, C. Grimm, and J.R. Davidson, Precision fruit tree pruning using a learned hybrid vision/interaction controller, in Proc. IEEE Intl Conf. on Robotics and Automation (ICRA), Philadelphia, PA, May 2022, pp. 2280-2286.
  • Type: Journal Articles Status: Under Review Year Published: 2022 Citation: A. You, N. Parayil, J. Gopala Krishna, U. Bhattarai, R. Sapkota, D. Ahmed, M. Whiting, M. Karkee, C.M. Grimm, and J.R. Davidson, An autonomous robot for pruning modern, planar fruit trees, IEEE Robotics and Automation Letters, 2022.
  • Type: Conference Papers and Presentations Status: Accepted Year Published: 2022 Citation: J.R. Davidson, A. You, N. Parayil, J. Gopala Krishna, M. Whiting, M. Karkee, and C. Grimm, Recent work on robotic pruning of upright fruiting offshoot cherry systems, III International Symposium on Mechanization, Precision Horticulture, and Robotics: Precision and Digital Horticulture in Field Environments, Angers, France, August 2022.
  • Type: Conference Papers and Presentations Status: Accepted Year Published: 2022 Citation: A. You, C. Grimm, and J.R. Davidson, Optical flow-based branch segmentation for complex orchard environments, IEEE/RSJ Intl Conf. on Intelligent Robots and Systems (IROS), Kyoto, Japan, October 2022.


Progress 07/15/20 to 07/14/21

Outputs
Target Audience: The target audience for this reporting period was the robotics research community. During the prior year, we focused on disseminating our research results through technical workshops and academic publications.

Changes/Problems: Nothing Reported

What opportunities for training and professional development has the project provided? For this performance period, this project provided research experiences for four graduate students.

How have the results been disseminated to communities of interest? The results have been disseminated through one journal submission and one workshop submission. Findings of the project were also discussed in a number of online seminars and conferences organized by commodity groups (e.g., Washington Tree Fruit Research Commission) and professional organizations (e.g., International Conference on Robotics and Automation).

What do you plan to do during the next reporting period to accomplish the goals? Our primary focus for the near term is incorporating force feedback into the pruning approach phase. The goal is to combine visual feedback and force feedback into a hybrid controller that uses visual feedback to initially place the cutter in the vicinity of the branch before handing off control to a force-based controller that handles the close-up positioning of the cutters around the branch. We may also explore reinforcement learning-based methods for learning the optimal way to maneuver the cutter toward the branch. We also believe that there is room to improve the skeletonization process via more sophisticated graph-based learning algorithms. Currently, our skeletonization algorithm is specific to sweet cherry trees and relies on priors and manually-tuned parameters. Graph-based algorithms should be capable of learning more robust rules about topology and geometry and could be applied to any set of trees with a fairly well-defined structure, not just UFO cherry trees. We will also focus on improving the tracking-camera-based 3D reconstruction of apple and cherry trees using global and local cameras. In addition, we will focus on developing pruning rules for automated selection of pruning points for the robot to reach and prune branches. Geometric and horticultural features (e.g., branch diameter and its relationship to the number of fruit that can be grown per unit cross-sectional area) will be used to develop a set of pruning rules specific to one cherry tree architecture (Upright Fruiting Offshoot) and one apple tree architecture (formal fruiting wall).

Impacts
What was accomplished under these goals?

During the first year of this project, we focused our efforts on Objective 3: Create an integrated perception and manipulation framework for fruit tree pruning.

WSU has developed a camera system for constructing an accurate 3D model of trees during the dormant season. The system consists of a depth camera and a tracking camera, both integrated into the end-effector of a UR5e manipulator (Universal Robots). The depth camera acquires point clouds from a target tree. During each capture, the tracking camera estimates the rotation and translation between a coordinate reference system (CRS) fixed to the end-effector and a CRS fixed to the world. Using these two cameras, all point clouds captured by the system can be transformed to a common CRS. When imaging a tree, the manipulator follows a predefined path in front of the tree that enables the system to 'see' the tree structure from different perspectives: a section of the tree that is occluded by a branch extending in front of it from one perspective may be visible from another, and a closer viewpoint yields a finer point cloud. As the end-effector travels along the waypoints, it continuously captures point clouds that are transformed and merged into a base point cloud. The transformation is not perfect, meaning that the incoming point cloud and the base one will not be perfectly aligned. To overcome this, the Iterative Closest Point (ICP) registration algorithm is applied recursively to align each incoming point cloud to the base point cloud (see the sketch below). Because the 3D model of the tree is composed of partial point clouds with a certain amount of overlap, a slight misalignment in one point cloud leads to a greater positioning error for the next incoming one. Another issue we are addressing is that ICP registration fails more frequently for increasingly sparse point clouds; in upright fruiting offshoot (UFO) cherry trees, the orchard system we considered during year one, the target side branches are thin, reducing the performance of the algorithm. As a solution to both problems, we are evaluating the use of another depth camera that captures a base point cloud of the entire tree from a greater distance. We then plan to use the camera system described above to reinforce the model by capturing details of initially occluded sections from closer range.

OSU has developed an algorithm for creating a tree model called a skeleton, i.e., a collection of line segments that describes the overarching topology of the tree, from the point cloud acquired by the perception system. Each segment in the resulting skeleton is also assigned a label corresponding to the part of the tree; for UFO cherry trees, this can be one of four classifications: trunk, support, leader, or side branch. The modeling process is set up as an optimization problem whose goal is to create the highest-scoring tree: positive score is awarded for including segments that are highly likely to correspond to real connections, while penalties are applied for violating known priors, such as leader branches pointing straight up.

More recently, we have started integrating force feedback into the approach phase of the system. Previously, when the cutting end-effector approached the target cut point, we used feedback from the camera to update the position estimate of the cut point, but we paid no attention to the amount of force being applied to the environment by the robot. In addition to avoiding large contact forces, we believe that force feedback will enable the robot to detect whether a cut is currently succeeding and adjust the trajectory of the cutter accordingly to guide the branch into the optimal cutting zone. Preliminary studies have shown success in using force readings alone to classify the state of execution via a Long Short-Term Memory (LSTM)-based neural network.
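For readers unfamiliar with the registration step, the following sketch shows how one align-and-merge iteration might look using the open-source Open3D library; we do not know the project's actual software stack, and the correspondence distance and voxel size are illustrative values:

    import open3d as o3d

    def merge_scan(base, incoming, init_tf):
        """Align one incoming capture to the growing base cloud and merge it.
        `init_tf` is the 4x4 pose estimate from the tracking camera; ICP then
        refines that initial alignment."""
        result = o3d.pipelines.registration.registration_icp(
            incoming, base,
            max_correspondence_distance=0.01,   # 1 cm; tune per sensor noise
            init=init_tf,
            estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
        incoming.transform(result.transformation)        # snap into the base frame
        # Downsample so repeated merges don't bloat the model
        return (base + incoming).voxel_down_sample(voxel_size=0.003)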

Publications

  • Type: Other Status: Other Year Published: 2021 Citation: A. You, N. Parayil, C. Grimm, and J.R. Davidson, Execution monitoring for robotic pruning using force feedback, in IEEE International Conference on Robotics and Automation (ICRA): Workshop on Task-Informed Grasping: Agri-Food Manipulation, 2021, virtual.
  • Type: Journal Articles Status: Under Review Year Published: 2021 Citation: A. You, C. Grimm, A. Silwal, and J.R. Davidson, Semantics-guided skeletonization of sweet cherry trees for robotic pruning, Computers and Electronics in Agriculture, 2021.