Source: Carnegie Mellon University submitted to NRP
COLLABORATIVE RESEARCH: NRI: BALANCE PRUNING OF DORMANT GRAPEVINES WITH AUTONOMOUS ROBOTS
Sponsoring Institution
National Institute of Food and Agriculture
Project Status
ACTIVE
Funding Source
Reporting Frequency
Annual
Accession No.
1027626
Grant No.
2021-67021-35974
Cumulative Award Amt.
$1,000,000.00
Proposal No.
2021-10935
Multistate No.
(N/A)
Project Start Date
Sep 1, 2021
Project End Date
Aug 31, 2025
Grant Year
2021
Program Code
[A7301]- National Robotics Initiative
Recipient Organization
Carnegie Mellon University
5000 Forbes Avenue
Pittsburgh, PA 15213-3815
Performing Department
Robotics Institute
Non Technical Summary
Our long-term goal is to develop a commercially viable, fully autonomous pruning system that reduces dependency on seasonal semi-skilled workers while improving productivity. The overall objective is to apply state-of-the-art robotics technology to significantly improve and stabilize the balance between vegetative and reproductive growth, yielding better fruit quality and a predictable crop load. Our approach deviates significantly from the established paradigm in robotic grapevine pruning in two major ways. First, we recognize that a grapevine training system that facilitates robotic technology in vineyards is key to the successful implementation of autonomous, selective pruning of vines. Second, the proposed robot is general-purpose and multi-functional, making it compatible with different varieties and canopy architectures and more novel than existing systems. Furthermore, the concept of balanced pruning (balancing vegetative and reproductive growth) is common to most woody perennial cropping systems (apples, cherries, other tree fruits, and nut trees), so the technology and concepts developed here for juice and wine grapes would translate to other systems as well.

A first iteration of the prototype robot, which executes the simpler and commonly practiced spur pruning, has already been built and recently evaluated in a commercial vineyard. This early-stage system evaluation played a crucial role in understanding the practical requirements in the field. The objectives in this proposal are significant improvements to the existing system, based on the lessons learned from using the prototype in real field deployments. We also use diverse cutting-edge learning methodologies in synergy with classical approaches to prune real vines in commercial fields, overcoming the individual limitations of each and leveraging the benefits of both. We believe that continuing this research could lead to a practical and economical solution for automated pruning within a reasonable time frame. The adoption of this technology will have significant impacts on the U.S. grape industry in both the mid and long terms.

Intellectual Merits: Robot systems that selectively prune grapevines do not exist, even though the industry has a clear need for them in today's economy. Pruning a vine in its natural form, without any modification, poses multiple interesting challenges that require advanced research in several branches of robotics, including perception, manipulation, and AI. In this proposal we investigate fundamental research advances in robotics that will have broader impact, ranging from automation in more general tree canopies to everyday tasks that require intelligent interaction with flexible materials in cluttered spaces. From a perception perspective, we propose illumination-invariant imaging capabilities to generate reliable and consistent pixel information in the outdoor environment. Dormant-season vines contain dense, criss-crossing branches that effectively fill a 3D volume while also leaving many small unoccupied spaces. The resulting highly occluded, complex geometry is difficult to model, and existing modeling methods such as SLAM are not capable of generating complete maps.
Our approach under the perception goal addresses this complex problem with a novel method that systematically and optimally identifies regions of interest and recovers missing information to complete vine models. Similarly, deciding where to make a pruning cut requires understanding the canopy at multiple levels, including its geometry, its topology (what is connected to what), and its semantic meaning (which parts are canes, buds, etc.). The ability to automatically generate this level of understanding does not currently exist. From a manipulation standpoint, we are pushing research boundaries to operate robot arms in cluttered spaces full of flexible objects, with AI modules that learn to avoid or push away objects in order to reach deeper into the canopy. Currently existing standard manipulation planning approaches are not equipped to handle these cases.

Broader Impacts: (i) While pushing the current bounds of robotics and AI research, this work has real potential to deliver more productive and sustainable agriculture, especially under farm labor shortages and climate change. (ii) The approaches presented here would increase the economic competitiveness of the U.S. grape industry and establish a partnership between academia, industry, and stakeholders. (iii) The research team will train members of underrepresented groups and a graduate student, and will expose local high school students to robotics through established programs at CMU.
Animal Health Component
25%
Research Effort Categories
Basic
75%
Applied
25%
Developmental
(N/A)
Classification

Knowledge Area (KA) | Subject of Investigation (SOI) | Field of Science (FOS) | Percent
205 | 1139 | 2020 | 100%
Knowledge Area
205 - Plant Management Systems;

Subject Of Investigation
1139 - Grapes, general/other;

Field Of Science
2020 - Engineering;
Goals / Objectives
The overall goal of this project is to develop and evaluate an autonomous robotic platform that executes dormant-season grapevine pruning in the service of promoting vine balance. There are three technical objectives and one viticultural objective.

Research Goal 1: Active perception for vine modeling. Comprehensive 3D information about a plant is a basic requirement for any robotic system that interacts with it. This goal is to use robotic perception systems to create a complete model of a dormant grapevine, i.e., a model without significant gaps or missing functional relationships between parts of the vine. The research will overcome shortcomings of current state-of-the-art mapping and modeling systems when applied in agricultural settings, the primary one being the large model gaps that result from the significant level of occlusion in grapevines. To solve this problem, we pose two questions: 1) where are the discontinuities in the input point cloud, and 2) how do we fill in the missing information to complete the model?

Research Goal 2: Robust vine vigor measurement. It is well known that cane weight and length are strongly correlated, so this information could by itself estimate vine size (pruning weight) using simple linear regression models (a minimal sketch of this baseline appears at the end of this section). However, such a process would require multiple intermediate, heuristic-based steps that are hard to generalize to complex structures. For an end-to-end solution, we instead leverage the robustness of deep networks to estimate vine size (i.e., pruning weight) directly from the point cloud.

Research Goal 3: Learning-based manipulation. The task of pruning dormant vines with a robotic manipulator can be interpreted as a sequence of four steps: i) scout the canopy to locate the desired pruning points, ii) find a path that, without damaging the plant, places the cutting tool in a region with a direct line of sight (i.e., free of obstacles) to the desired point, iii) move toward the cutting point, and iv) make the cut. This goal is to develop the fundamental and systems advances necessary to implement these four steps autonomously. It is broken into sub-goals that include data-driven approaches to learn pruning rules, and reinforcement-learning-driven policies to safely reach a manipulator into desired cut points within a vine canopy.

Research Goal 4: Further design and develop grapevine training systems for robotic applications. This goal is to deploy, validate, and understand the impact of using the automated pruning system that results from Research Goals 1-3 in a viticultural setting. There are two subgoals: to test the robot in three commonly used vine architectures, and to understand the best way to use an autonomous pruning robot in combination with currently existing mechanized pruning machines.
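Goal 2's premise, that total cane length tracks pruning weight closely enough for a simple linear model, can be illustrated in a few lines. The sketch below fits an ordinary least-squares line relating the two quantities; the variable names and numbers are illustrative placeholders, not project data.

```python
# Baseline sketch for Research Goal 2: estimate vine pruning weight from total
# cane length with ordinary least squares. Sample values are hypothetical.
import numpy as np

# total cane length per vine (m) and measured pruning weight (kg) -- illustrative
cane_length = np.array([8.2, 11.5, 14.1, 9.7, 16.3, 12.8])
prune_weight = np.array([0.41, 0.63, 0.80, 0.50, 0.95, 0.71])

# fit weight ~= a * length + b via least squares
A = np.vstack([cane_length, np.ones_like(cane_length)]).T
(a, b), *_ = np.linalg.lstsq(A, prune_weight, rcond=None)

predicted = a * cane_length + b
r = np.corrcoef(predicted, prune_weight)[0, 1]
print(f"weight ~= {a:.3f} * length + {b:.3f}, r = {r:.2f}")
```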
Project Methods
The overall approach of the program is to develop the fundamental advances necessary to achieve Research Goals 1-3, then integrate the results into a fieldable robotic system for Research Goal 4. A description of the methods for each research goal follows.

Research Goal 1: Active perception for vine modeling. To deal with point cloud discontinuities in the model, we start by voxelizing the incoming (incomplete) point cloud from the global stereo cameras using methods such as an octree data structure. At a high level, the next step is to track topological changes in the resulting graph as a function of voxel size. As the voxel size iteratively decreases, strongly connected components split into clusters, because the smaller voxels can no longer maintain the connections to neighbouring points that the initially larger voxels bridged. The pipeline then simply keeps track of all new disconnections across iterations, so that at the end of the cycle it outputs a list of missing edges in the graph that can be traced to their actual 3D locations in the digitized model (a minimal sketch of this voxel-size sweep appears at the end of this research goal). A custom-built in-hand camera attached to the robot arm then positions itself optimally in the vicinity of the canopy regions indicated by the algorithm.

Once discontinuities have been identified and the in-hand camera has been positioned, we will connect the disconnected regions of the model by fusing the in-hand camera point cloud back into the input point cloud. Traditional point cloud registration approaches such as Iterative Closest Point (ICP) could also work here; however, ICP requires very accurate initialization and usually does not converge well for sparse or noisy inputs. For robust point cloud infill without any initialization or extrinsic calibration of the eye-in-hand camera, we propose a deep learning-based approach. Motivated by MaskNet, we present "Dual MaskNet," which learns an additional binary vector, the difference between the inliers and the input in-hand point cloud, and thereby learns to capture the missing links.
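To make the discontinuity search concrete, here is a minimal sketch of the voxel-size sweep described above: voxelize the cloud at progressively smaller sizes, count connected components of the occupied-voxel graph, and flag each split as a candidate gap to re-image with the in-hand camera. This is an illustration under simplifying assumptions (a uniform grid stands in for the octree, and a synthetic two-segment "vine" stands in for real data), not the project's implementation.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def components_at_voxel_size(points: np.ndarray, voxel: float) -> int:
    """Number of 26-connected components of the occupied voxel grid."""
    keys = np.unique(np.floor(points / voxel).astype(np.int64), axis=0)
    index = {tuple(k): i for i, k in enumerate(keys)}
    rows, cols = [], []
    offsets = [np.array(o) for o in np.ndindex(3, 3, 3) if o != (1, 1, 1)]
    for i, k in enumerate(keys):
        for off in offsets:                      # check all 26 neighbours
            j = index.get(tuple(k + off - 1))
            if j is not None:
                rows.append(i); cols.append(j)
    adj = coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(len(keys),) * 2)
    n, _ = connected_components(adj, directed=False)
    return n

# synthetic "vine" with a small gap: two nearly touching segments
seg1 = np.linspace([0, 0, 0], [1.0, 0, 0], 200)
seg2 = np.linspace([1.05, 0, 0], [2.0, 0, 0], 200)
cloud = np.vstack([seg1, seg2])

prev = None
for voxel in [0.2, 0.1, 0.05, 0.025]:            # iteratively shrink voxel size
    n = components_at_voxel_size(cloud, voxel)
    if prev is not None and n > prev:            # a split => candidate gap to re-image
        print(f"split detected at voxel size {voxel}: {prev} -> {n} components")
    prev = n
```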
Research Goal 2: Robust vine vigor measurement. We approach the problem of translating vine point clouds into a vigor measurement by building upon a newer, more generalized form of deep neural network, the Graph Neural Network (GNN), which has received much attention in recent years. This type of network can take graphs directly as input and learn complex patterns from them. Here, the input to the GNN would be a graph of segmented cane point clouds (the nodes are the cane segments; the edges are the physical connections between them). After multiple graph convolution layers extract high-dimensional features, the network ends with fully connected layers for regression that output the vine size. Under this objective, we will further advance this approach by investigating the latest developments in GNNs with attention networks. We hypothesize that a graph attention network could dynamically adjust its kernel to adapt to the structure of vines; recent work has shown similar capabilities in point cloud segmentation with GNNs. Such features could allow us to process point clouds without pre-processing, for more robust end-to-end capabilities.

Research Goal 3: Learning-based manipulation. Research Goal 3 has two objectives: learning the pruning rule, and reaching the desired pruning point.

To learn the pruning rule, we will explore methods to estimate the best set of cut points based on the vine sensing and modeling described in earlier sections. The selection of pruning points is crucial to maintaining the balance of the vines through all seasons and has to be carefully planned; the planning incorporates the tree structure, its geometric characteristics, the locations of the buds, and the vine vigor measurements into the selection of pruning points. In this project we will study two data-driven approaches in an effort to mimic the decision making of expert human pruners: supervised and self-supervised learning. We will first use a binary supervised learning method, which fits this problem well. The training dataset can be composed of the tree model (possibly including its geometric features) and the 3D locations of all the buds. Each bud will be labeled as cut or no-cut. These labels will be provided by human experts, who will mark the buds selected for pruning on real vines; we can subsequently identify those points in their virtual counterparts (a minimal sketch of this formulation appears at the end of this research goal).

To reach the pruning points, we will explore the trade-offs between classical approaches and research in reinforcement learning (RL). Here, we use the term "classical approach" to refer to traditional trajectory and path planners such as rapidly exploring random trees or optimization-based planners. Once the tool is in direct line of sight, driving it to the cutting point with a classical planner is a pragmatic solution. However, moving the tool from the home position to a pose with direct line of sight requires something more advanced to deal with the presence of complex occlusions. Here we will design an approach based mainly on RL, as it naturally fits the exploration-exploitation behavior required for globally or locally scouting the vine and locating new potential pruning points. We plan to maintain the research direction in on-policy methods such as Proximal Policy Optimization (PPO), as it provides a theoretically grounded learning framework that improves on known issues of other methods, such as sample efficiency, implementation complexity, and hyper-parameter tuning.
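As a concrete illustration of the binary cut/no-cut formulation for learning the pruning rule, the sketch below trains a small classifier over per-bud features. The feature set, network size, and data are hypothetical stand-ins; the project's actual representation is built on the vine models described above.

```python
# Minimal sketch of the "learn the pruning rule" formulation: a binary
# classifier that scores each bud as cut / no-cut from hand-picked features.
# Features and data here are hypothetical placeholders.
import torch
import torch.nn as nn

class BudCutClassifier(nn.Module):
    def __init__(self, n_features: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, 32), nn.ReLU(),
            nn.Linear(32, 1),            # logit for P(cut)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

# hypothetical per-bud features: [bud index on cane, distance from cordon (m),
# local cane diameter (mm), vine pruning-weight estimate (kg)]
X = torch.randn(256, 4)                  # stand-in for expert-labeled vines
y = (X[:, 0] > 0).float()                # stand-in cut / no-cut labels

model = BudCutClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(200):                  # tiny training loop
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

with torch.no_grad():
    print(torch.sigmoid(model(X[:5])))   # per-bud cut probabilities
```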
Research Goal 4: Further design and develop grapevine training systems for robotic applications. In this research goal, we will integrate the above capabilities into a fieldable demonstration platform for autonomous grapevine pruning that includes a mobile base, the necessary perception systems, and a robotic arm equipped with a vine cutting tool. It will be deployed at the Cornell Lake Erie Research and Extension Laboratory (CLEREL) in a variety of viticultural settings to validate performance and investigate the potential impact of using this technology in commercial production systems. All of the varieties, training systems, mechanization field comparisons, and vineyard equipment have already been established at CLEREL, and Dr. Bates and his team have extensive experience in vineyard mechanization and precision viticulture management.

The pruning robot will be tested in V. vinifera, V. labrusca, and inter-specific hybrid research vineyard plots. 'White Riesling' will be low-wire cordon trained with vertical shoot positioning, 'Concord' will be high-wire trained with sprawling canes, and 'Vignoles' will be high-wire cordon trained as a semi-sprawl system. For each variety, the robot will be used in an unpruned system, to test its ability to complete 100% of the pruning job, and in a mechanically pre-pruned system, to test its ability to refine fruiting bud quantity and quality. These two treatments (robotic, mechanical + robotic) will be compared to a control of 100% manual balanced pruning. Observations will be collected on vine vegetative growth (pruning weight and retained nodes), reproductive growth (yield and yield components), and fruit quality (juice soluble solids, titratable acidity, pH, and color).

To explore the relationship between trellis architecture and automation efficiency, a traditional grape trellis with wood posts at every third vine will be compared to a vineyard with a metal post at each vine. The wood trellis system is less expensive to install but sags over time, creating challenges for machine operation. The metal trellis design costs more to install but maintains uniformity and could potentially improve efficiency. A side-by-side comparison will be conducted with the pruning robot at CLEREL.

Progress 09/01/23 to 08/31/24

Outputs
Target Audience: The audiences included growers, industry, entrepreneurs, researchers, faculty, students, visiting scholars, guest lecturers, and the public. Changes/Problems: Nothing Reported What opportunities for training and professional development has the project provided? The second graduate student hired on this project is focusing his Master's thesis on computer vision and robot learning algorithms that learn to prune vines from human demonstrations, in collaboration with another graduate intern. Co-PI Silwal included the robot design and AI concepts as a case study in one of the lectures for the graduate-level course 16765 Robotics and AI in Agriculture. How have the results been disseminated to communities of interest? Co-PI Silwal presented the recent progress at several national and international workshops, conferences, and class lectures. Papers: Qureshi, M. N., Garg, S., Yandun, F., Held, D., Kantor, G., & Silwal, A. (2024). Splatsim: Zero-shot sim2real transfer of RGB manipulation policies using Gaussian splatting. arXiv preprint arXiv:2409.10161. Schneider, E., Jayanth, S., Silwal, A., & Kantor, G. (2023, October). 3D Skeletonization of Complex Grapevines for Robotic Pruning. In 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 3278-3283). IEEE. Conference: Splatsim: Zero-shot sim2real transfer of RGB manipulation policies using Gaussian splatting. CoRL Workshop on Mastering Robot Manipulation in a World of Abundant Data. November 9, 2024. Invited Speaker: Robotics and AI for Ag. School of Plant and Environmental Sciences, Virginia Tech. September 5, 2024. Agricultural Robotics: From efficient data generation for robot learning to the sim2real gap when applying robotics in agriculture. Robotics seminar series at West Virginia University. October 4, 2024. Autonomy in Agriculture. School of Computer Science, Carnegie Mellon University. Raj Reddy Symposium. November 8, 2024. Robotics and AI for Ag. Robotics Institute Summer Scholars (RISS). Carnegie Mellon University. July 24, 2024. What do you plan to do during the next reporting period to accomplish the goals? We plan to accomplish the following during the next reporting period. Objective 1: Completed. Objective 2: Completed. Objective 3: Deploy our currently trained diffusion models in real field conditions. Objective 4: We will continue to maintain the different pruning styles in both high-wire (variety Concord) and low-wire (variety White Riesling) cordon-trained grapevines for the project. We will also continue to maintain manually pruned and mechanically pre-pruned Concord vines to assist with robot training.

Impacts
What was accomplished under these goals? In our previous annual report, we highlighted the successful completion of research goals 1 and 2. Specifically, we designed and implemented active perception algorithms to position cameras for identifying and capturing gaps in vine models. This enabled the construction of highly detailed 3D models of complex dormant-season vines. Additionally, we collected, curated, and shared data while designing baseline algorithms to predict vine pruning weight using both point clouds and images. Following these achievements, our focus shifted primarily to research goal 3: developing a learning-based approach for pruning grapevines. Recently, we explored diffusion-based policies, a state-of-the-art method in imitation learning. To support this effort, we equipped a robotic system with both in-hand and global cameras, as well as a teleoperation system, to gather human demonstrations of vine pruning. However, our data collection was constrained by limited access to commercial vineyards towards the end of the dormant season, adverse weather conditions, changing lighting, and teleoperation fatigue. These challenges prompted us to pursue a novel direction: leveraging the latest Gaussian splatting-based methods to develop a hyper-realistic simulation model. This approach allows us to generate extensive data in a controlled simulation environment, overcoming real-world constraints. It also paves the way for addressing the sim-to-real gap by utilizing the realistic rendering capabilities of Gaussian splatting.
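For readers unfamiliar with diffusion-based imitation policies, the sketch below shows the test-time mechanic such a policy uses: start from Gaussian noise over a short action horizon and iteratively denoise it, conditioned on the current observation. The tiny network, the dimensions, and the noise schedule are illustrative stand-ins, not our trained model.

```python
# Schematic of reverse-diffusion action sampling in a diffusion policy.
# All sizes and the schedule below are hypothetical.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM, HORIZON, T = 16, 7, 8, 50   # illustrative dimensions

class EpsNet(nn.Module):
    """Predicts the noise added to an action sequence, given obs and timestep."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM + HORIZON * ACT_DIM + 1, 128), nn.ReLU(),
            nn.Linear(128, HORIZON * ACT_DIM),
        )

    def forward(self, obs, actions, t):
        h = torch.cat([obs, actions.flatten(1), t.float().unsqueeze(1) / T], dim=1)
        return self.net(h).view(-1, HORIZON, ACT_DIM)

betas = torch.linspace(1e-4, 0.02, T)         # DDPM-style noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

@torch.no_grad()
def sample_actions(model, obs):
    x = torch.randn(obs.shape[0], HORIZON, ACT_DIM)   # pure-noise trajectory
    for t in reversed(range(T)):                      # iterative denoising
        tt = torch.full((obs.shape[0],), t, dtype=torch.long)
        eps = model(obs, x, tt)
        mean = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x   # denoised HORIZON-step action chunk

actions = sample_actions(EpsNet(), torch.randn(1, OBS_DIM))
print(actions.shape)   # torch.Size([1, 8, 7])
```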

Publications

  • Type: Other Status: Submitted Year Published: 2024 Citation: Qureshi, M. N., Garg, S., Yandun, F., Held, D., Kantor, G., & Silwal, A. (2024). Splatsim: Zero-shot sim2real transfer of rgb manipulation policies using gaussian splatting. arXiv preprint arXiv:2409.10161.
  • Type: Conference Papers and Presentations Status: Submitted Year Published: 2023 Citation: Schneider, E., Jayanth, S., Silwal, A., & Kantor, G. (2023, October). 3D Skeletonization of Complex Grapevines for Robotic Pruning. In 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 3278-3283). IEEE.
  • Type: Conference Papers and Presentations Status: Published Year Published: 2024 Citation: Splatsim: Zero-shot sim2real transfer of rgb manipulation policies using gaussian splatting. CoRL Workshop on Mastering Robot Manipulation in a World of Abundant Data. November 9, 2024
  • Type: Conference Papers and Presentations Status: Published Year Published: 2024 Citation: Robotics and AI for Ag. School of Plant and Environmental Sciences, Virginia Tech. September 5th 2024
  • Type: Conference Papers and Presentations Status: Published Year Published: 2024 Citation: Agricultural Robotics: From efficient data generation for robot learning to sim2real gap when applying robotics in agriculture. Robotics seminar series at West Virginia University. October 4, 2024.
  • Type: Conference Papers and Presentations Status: Published Year Published: 2024 Citation: Autonomy in Agriculture. School of Computer Science, Carnegie Mellon University. Raj Reddy Symposium. November 8th 2024
  • Type: Conference Papers and Presentations Status: Published Year Published: 2024 Citation: Robotics and AI for Ag. Robotics Institute summer scholars (RISS). Carnegie Mellon University. July 24th 2024


Progress 09/01/22 to 08/31/23

Outputs
Target Audience: The audiences included growers, industry, entrepreneurs, researchers, faculty, students, visiting scholars, guest lecturers, and the public. Changes/Problems: Nothing Reported What opportunities for training and professional development has the project provided? One graduate student was hired and recently completed his Master's degree in Robotics, with a thesis focused on the computer vision algorithms for generating 3D models of complex vines and estimating pruning weight that are central to this project. One technical staff member was also hired to fine-tune the design and fabrication of the robot. Co-PI Silwal included the robot design and AI concepts as a case study in one of the lectures for the graduate-level course 16765 Robotics and AI in Agriculture. How have the results been disseminated to communities of interest? Co-PI Silwal presented the recent progress at several national and international workshops, conferences, and class lectures. Robotics and AI for Agriculture. TrAC seminar, Iowa State University. March 24, 2023. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). October 5, 2023 (upcoming). Publication: Schneider, E., Jayanth, S., Silwal, A., & Kantor, G. (2023). 3D Skeletonization of Complex Grapevines for Robotic Pruning. arXiv preprint arXiv:2307.11706. Submitted to IROS 2023. What do you plan to do during the next reporting period to accomplish the goals? We plan to accomplish the following during the next reporting period. Objective 1: Using state-of-the-art Graph Neural Networks (GNNs), we plan to use a learning-based approach to objectively quantify occluded, hidden, or incomplete sections of the vine 3D model. This information will then be used to actively maneuver the robot arm with the in-hand camera to capture and merge new parts into the existing model. Objective 2: Completed. Objective 3: We intend to implement deep reinforcement learning policies (teaching by demonstration) to learn from the expert pruning dataset that we recently curated. As reward function formulation is a complex issue in deep RL, during the next reporting period we will mostly focus on simulation and experiments with various deep RL algorithms to train an RL agent to efficiently find pruning locations in real-world complex vines. Objective 4: We will continue to maintain the different pruning styles in both high-wire (variety Concord) and low-wire (variety White Riesling) cordon-trained grapevines for the project. We will also continue to maintain manually pruned and mechanically pre-pruned Concord vines to assist with robot training.

Impacts
What was accomplished under these goals? Vine dataset: Datasets are essential to any data-driven approach. To accomplish Objectives 1, 2, and 3, which center on computer vision and deep reinforcement learning algorithms, we collected and curated two different datasets. First, we collected a multi-view stereo dataset using our existing robot platform and robust outdoor cameras to generate detailed 3D models of complex vines. The data consist of stereo images from side- and down-facing camera pairs along a linear slider. In total, 144 scans of Concord vines were taken, along with pruning weights. A single scan consists of images from the two stereo pairs captured at seven positions along the linear slider. Concord vines were chosen as the most complex vines at the test site; these methods have not yet been tested on other varieties. Because annotating segmentation masks with many thin features takes high effort, 91 images were labeled pixel-wise using polygons, broken into the classes (background, cane cordon, post, leaf, sign). The images, class annotations, and pruning statistics are publicly shared (see the products section for details). Our progress so far shows that using 3D models to volumetrically estimate pruning weight is a fast, accurate, and non-destructive approach: we achieved a correlation between ground truth and our method as high as 0.7, significantly higher than recent work in this area (a toy sketch of the volumetric idea appears below). Second, to train a deep RL model to learn pruning rules from expert pruners, we collected before-and-after scans of vines in commercial fields. In this data collection effort, a professional pruner was asked to prune vines following industry-standard pruning rules, and the same multi-view approach as in the first dataset was used to collect scans before and after pruning. We currently have 60 scans of vines, are curating the dataset, and will share it in the future.
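One simple way to picture the volumetric estimate is to treat each segment of a skeletonized vine model as a cylinder, sum the volumes, and scale by an assumed wood density. The sketch below does exactly that with made-up segment dimensions and an assumed density; our actual pipeline operates on the 3D models described above, so take this only as an illustration of the idea.

```python
# Toy sketch of volumetric pruning-weight estimation from a skeletonized vine:
# sum per-segment cylinder volumes and convert to mass. All values are
# illustrative assumptions, not project parameters.
import numpy as np

# (length_m, radius_m) per skeleton segment -- hypothetical values
segments = np.array([
    [0.42, 0.0040],
    [0.35, 0.0050],
    [0.58, 0.0045],
    [0.27, 0.0035],
])

WOOD_DENSITY = 900.0                          # kg/m^3, assumed for fresh cane wood
lengths, radii = segments[:, 0], segments[:, 1]
volume = np.sum(np.pi * radii**2 * lengths)   # sum of cylinder volumes
print(f"estimated pruning weight: {WOOD_DENSITY * volume:.3f} kg")
```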

Publications

  • Type: Conference Papers and Presentations Status: Accepted Year Published: 2023 Citation: Robotics and AI for Agriculture. TrAC seminar, Iowa State University. 24th March 2023
  • Type: Conference Papers and Presentations Status: Accepted Year Published: 2023 Citation: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 5th October 2023 (upcoming)
  • Type: Journal Articles Status: Submitted Year Published: 2023 Citation: Schneider, E., Jayanth, S., Silwal, A., & Kantor, G. (2023). 3D Skeletonization of Complex Grapevines for Robotic Pruning. arXiv preprint arXiv:2307.11706. Submitted to IROS 2023


Progress 09/01/21 to 08/31/22

Outputs
Target Audience: Research scientists in ag-robotics, viticulture, and agriculture in general. Students: graduate, undergraduate, and high school students from different backgrounds, including computer science, engineering, and agriculture. Other: stakeholders, growers, and the public. Changes/Problems: Nothing Reported What opportunities for training and professional development has the project provided? In preparation for and during the field trip, three graduate students received hands-on robotics training in designing, programming, and deploying robotic systems in a commercial agricultural setting. The graduate students and staff scientist were also given the opportunity to interact with agricultural domain experts to gain firsthand experience in production agriculture. PI Kantor and Co-PI Silwal included the robot design and AI concepts explored in this research as a case study in one of the lectures for the course 16765 Robotics and AI in Agriculture. One graduate student presented a poster at the Fourth International Workshop on Machine Learning for Cyber-Agricultural Systems (MLCAS 2022). How have the results been disseminated to communities of interest? A part of the dataset, curated with high-resolution stereo image pairs, hand-labeled pixel-level segmentation masks, and stereo parameters, was publicly shared at https://labs.ri.cmu.edu/aiira/resources/ during the MLCAS workshop mentioned above. What do you plan to do during the next reporting period to accomplish the goals? 1. An algorithm for point cloud in-fill using the in-hand camera to improve point cloud models of complex vines. 2. Better estimation of pruning weight using 3D vine models. 3. Reinforcement learning models to learn pruning rules from human demonstrations. 4. Publicly share all datasets once curated and organized.

Impacts
What was accomplished under these goals? We recently completed our first scheduled data collection trip to a commercial vineyard at the Cornell Lake Erie Research and Extension Laboratory (CLEREL, project collaborators) in NY. The trip mostly focused on collecting high-resolution stereo images of dormant-season vines with the robot system described in the proposal. The robot system consisted of dual stereo cameras (positioned in a top-and-bottom configuration) and an in-hand stereo camera attached to a seven-degree-of-freedom robot arm. The robot scanned the grapevines with the dual stereo cameras from seven different views and with the in-hand camera from 25 pre-programmed viewpoints. In total, two vine varieties, viz. Concord and Riesling, were imaged, with 80 Riesling scans and 60 Concord scans. We also collected 60 samples of pruning weight from a row of Concord vines. Additionally, professional pruners were asked to hand prune the scanned vines, and a matched pre-/post-pruning dataset was collected. Along with the images, the dataset includes RTK-GPS locations of each vine, the joint positions of the robot arm, and stereo calibration parameters. All data were recorded as ROS bags for ease of integration with our existing algorithms. The dataset details are as follows: 1. Global camera scans: in total, 14 different stereo views with the high-resolution dual stereo cameras on 60 Concord vines and 80 Riesling vines. 2. In-hand camera scans: in total, 25 different views of 20 Riesling and 5 Concord vines. 3. Pruning weights: in total, 60 digital scans and the associated pruning weights of Concord vines, and 80 scans and pruning weights of Riesling vines. 4. Pre-/post-pruning scans: 80 matched pre-/post-pruning scans of Riesling vines. Research Goal 1: The goal here is to use the robotic in-hand perception system to create a complete model of a dormant grapevine. The data collected with the in-hand camera are being analyzed to find the most informative views among the collected viewpoints; the stereo point clouds will then be used to complete the vine models. Research Goal 2: Robust vine vigor measurement. The goal here is to estimate pruning weight using cane lengths as proxies. Current results show a decent correlation between cane length and pruning weight, with an R-squared of 0.5. Research Goal 3: Learning-based manipulation. The goal here is to learn the pruning rule from human demonstrations, which requires expert data to train the machine learning (ML) model. The matched pre-/post-pruning data are the starting point for this goal: the overlap and difference between the scans of pre- and post-pruned vines will automatically generate expert pruning locations as training data for the ML model (a minimal sketch of this differencing appears below). Research Goal 4: Further design and develop grapevine training systems for robotic applications. TBD.
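A minimal sketch of the pre-/post-scan differencing idea: points present in the pre-pruning scan that have no nearby counterpart in the post-pruning scan correspond to removed wood, from which cut locations can be derived. The synthetic clouds and the distance threshold below are illustrative assumptions, not the project's implementation.

```python
# Sketch of auto-generating pruning labels from matched pre/post scans via a
# nearest-neighbour difference. Clouds and threshold are synthetic stand-ins.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
pre = rng.uniform(0, 1, size=(2000, 3))          # stand-in: pre-pruning scan
keep = pre[pre[:, 0] < 0.7]                      # post scan lost the x > 0.7 wood
post = keep + rng.normal(0, 0.002, keep.shape)   # small registration noise

tree = cKDTree(post)
dist, _ = tree.query(pre, k=1)
removed = pre[dist > 0.02]                       # pre points with no post match

print(f"{len(removed)} of {len(pre)} points labeled as removed wood")
# Cut locations can then be approximated as the removed-wood points closest
# to the retained structure, e.g. via another nearest-neighbour query.
```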

Publications