Source: UNIV OF MINNESOTA submitted to
SURVEYING AND SERVOING AS CANONICAL TASKS TO ENABLE FUTURE FARMS WITH COMMERCIAL OFF-THE-SHELF ROBOTS
Sponsoring Institution
National Institute of Food and Agriculture
Project Status
TERMINATED
Funding Source
Reporting Frequency
Annual
Accession No.
1007019
Grant No.
2016-67021-24534
Project No.
MIN-98-G02
Proposal No.
2015-08799
Multistate No.
(N/A)
Program Code
A7301
Project Start Date
Dec 1, 2015
Project End Date
Nov 30, 2019
Grant Year
2016
Project Director
Isler, V.
Recipient Organization
UNIV OF MINNESOTA
(N/A)
ST PAUL, MN 55108
Performing Department
Computer Science/Engineering
Non Technical Summary
Specialty crop growers rely on manual labor for fruit-picking, inspection, data collection for precision agriculture, and similar labor-intensive tasks. It has been difficult to automate these tasks because specialty farms are much less structured environments than commodity farms. However, there is a great need for automating specialty crop tasks because seasonal workers are costly and in short supply, and collecting data for precision agriculture methods is difficult. Robotic systems capable of performing specialty crop tasks are becoming commercially available, more robust, and more affordable. What is missing are the planning algorithms required for these robots to autonomously operate in complex environments such as apple orchards. The ultimate goal of the proposed work is to develop algorithms so that Commercial Off-The-Shelf (COTS) robot systems can be used in automation tasks involving specialty crops. The project will focus on two sets of problems: (1) Surveying problems require planning the trajectory of a sensor to collect information about objects of interest. Such sensor planning problems in three-dimensional, complex environments are provably hard. The proposed work introduces new sensor planning problems in environments which are both general enough to capture the complexity of objects encountered on small farms (e.g., trees) and sufficiently constrained so that their visibility properties can be exploited to design efficient algorithms. (2) For end-effector placement, we introduce new visual servoing path planning techniques utilizing multiple arms for operation in cluttered environments with obstacles, such as the branches of a fruit tree. These algorithms will be implemented and tested in the context of two field studies involving quality inspection with near infra-red (NIR) sensors mounted on a COTS robotic manipulator and surveying to collect harvest-related data.
These studies, combined with the surveying and servoing algorithms, open possibilities for developing new metrics, tools, and techniques for agricultural sciences by enabling data collection at scales not possible for humans.
Animal Health Component
0%
Research Effort Categories
Basic
50%
Applied
30%
Developmental
20%
Classification

Knowledge Area (KA) | Subject of Investigation (SOI) | Field of Science (FOS) | Percent
402 | 1110 | 2080 | 90%
205 | 1110 | 1060 | 10%
Goals / Objectives
The goal of this project is to develop planning algorithms for robots to autonomously operate in complex environments such as apple orchards so that Commercial Off-The-Shelf (COTS) robot systems can be used in automation tasks involving specialty crops. The project will focus on two sets of problems: (1) Surveying problems require planning the trajectory of a sensor to collect information about objects of interest. Such sensor planning problems in three-dimensional, complex environments are provably hard. The proposed work introduces new sensor planning problems in environments which are both general enough to capture the complexity of objects encountered on small farms (e.g., trees) and sufficiently constrained so that their visibility properties can be exploited to design efficient algorithms. (2) For end-effector placement, we introduce new visual servoing path planning techniques utilizing multiple arms for operation in cluttered environments with obstacles, such as the branches of a fruit tree.
Project Methods
design and analysis of geometric algorithms for view planning
computer vision methods for fruit detection and counting
vision-based control mechanisms for servoing

Progress 12/01/15 to 11/30/19

Outputs
Target Audience: Our main focus this final NCE year was to finish publications and release datasets. These are detailed in the products section. This year the University of Minnesota launched a new robotics institute in a new facility. We had numerous visitors from all segments of society who learned about the project. In addition to the associated conference presentations, Isler gave multiple talks about this project at the University of Pennsylvania and the University of Maryland. Co-PI Hu gave the following presentations: "Robotics for Lot Size of One" - Southeast Regional Fruit and Vegetable Conference, Savannah, GA (January 12, 2019); "Robotics 2.0 for Food Manufacturing" - Transformational Food Manufacturing Workshop, Atlanta, GA (September 25, 2019). PhD student Pravakar Roy presented at UMN's terrestrial invasive species center and MS student Joshua Anderson presented at the University of Florida. Changes/Problems: Nothing Reported What opportunities for training and professional development has the project provided? Overall the project had a tremendous positive impact on training. Pravakar Roy (UMN, CS), who was fully funded by this project, obtained his PhD and is now at Microsoft Research. Konrad Ahlin (GATech, ME), who was fully funded by this project, obtained his PhD and is now at GTRI. Joshua Anderson (UMN, Horticultural Science) completed his MS thesis and is now a PhD student at U. Florida. In addition, the following students were partially funded: Wenbo Dong (UMN, CS) is scheduled to finish his PhD this spring and has accepted a research position at 3M. Nikolaos Stefas (UMN, CS) is scheduled to finish his PhD this spring and is on the job market. Patrick Plonski (UMN, CS) received I-Corps training to commercialize the output of this project and is now the CEO of a UMN spinoff, Farm Vision Technologies. He defended his thesis and will graduate this semester.
How have the results been disseminated to communities of interest? In addition to the publications and presentations described above, we also released our dataset in a permanent repository and published an associated journal paper: https://conservancy.umn.edu/handle/11299/206575 The dataset has been downloaded almost 1000 times since its release in September 2019. The usage statistics are available at the same address. What do you plan to do during the next reporting period to accomplish the goals? The project has ended.

Impacts
What was accomplished under these goals? For surveying, we focused on counting and individual fruit-size estimation. Our efforts were focused on two aspects: (1) improving our detection/segmentation routines with state-of-the-art deep learning results, reported in "A comparative study of fruit detection and counting methods for yield mapping in apple orchards," N. Häni, P. Roy, V. Isler, Journal of Field Robotics; and (2) mapping trees, in particular to merge counts from both sides: "Semantic mapping for orchard environments by merging two-sides reconstructions of tree rows," W. Dong, P. Roy, V. Isler, Journal of Field Robotics 37 (1), 97-121. A handheld NIR spectrometer was used to acquire spectra along with traditional phenotyping of several fruit quality traits. Results showed that two trait spectral models (starch pattern index and soluble solids concentration) had the ability to discriminate between low and high groups across a dataset of 15 cultivars. The models for the other fruit traits examined (firmness, TA, skin traits, etc.) either had low predictive abilities or would not be useful beyond the ability to discriminate between cultivars with extreme trait values. Temperature and outdoor limitations of the F-750 spectrometer were determined to be minimal compared to the importance of collecting more than a single scan per fruit to control for individual fruit heterogeneity. The architecture and yield efficiency of apple trees having a common scion variety but varying in rootstock variety were characterized by imaging with a red-green-blue-depth (RGB-D) sensor and by conventional hand measures of tree size and measured yield. The relationship between image-derived metrics and hand-measured values was highest for tree height (R2=0.93) and TCA (R2=0.71). With improvements in image processing time, image-derived tree metrics are a feasible replacement for manual measures.
The predictive ability of cumulative yield by image-based tree volume was lower than by manually measured tree volume. Tree volume in general did not improve upon the mixed model containing TCA and tree height when estimating cumulative yield. Growers depend on tree architecture research for decision making, for example in tree training schemes and rootstock-scion performance. (1) Graph-based cooperative robot path planning in agricultural environments. We developed a novel method of using dual robot arms to cooperatively pick apples in an unstructured orchard environment. Each arm is equipped with an RGB-D (color-plus-depth) camera in an eye-in-hand configuration. The first robot arm, termed the Grasp arm, is positioned relatively close to the tree and tasked with picking apples. It uses its camera to locate apples that are within view and also within reach. The second, Search arm, is located nearby and is used to detect apples that are hidden from the Grasp arm and to plan a clear path for it to those fruits using a method based on rapidly-exploring random trees. Fruit location and clear-path information is encoded into a graph representation that is expressive enough to encode memory and lends itself to various decision-making algorithms. It embodies the idea of using clear paths for planning, as opposed to mapping obstacles, hence maintaining a lower-dimensional representation. Computer simulation and experimental results are presented based on a preliminary implementation. (2) Leveraging deep learning and RGB-D cameras for cooperative apple-picking robot arms. We developed a novel method of using two cooperating robot arms to pick apples in a typical unstructured orchard. Each robot arm is equipped with an Intel RealSense D435 RGB-D (color plus depth) camera at its wrist. The first robot arm (termed the "Search arm") is used to detect apples and also to survey a given orchard tree's volumetric space for clear paths.
The second, "Grasp arm," is positioned relatively closer to the tree and tasked with approaching fruit with the intent to harvest. A custom-trained deep learning-based object detection algorithm, YOLO (You Only Look Once), is used to find apples in the cluttered scene. Apple location and clear-path information is encoded into a graph and used for planning. The arms are driven by a finite state machine with information provided by the graph. Computer simulation and real-world experimental results are reported. Based on our results to date, solutions are proposed to improve robustness and apple localization, and to minimize the average time to pick all feasible apples.
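As an illustration, the clear-path graph idea described above can be sketched in a few lines. The node names, dictionary representation, and breadth-first planner below are simplifying assumptions for exposition, not the implementation used in the project:

```python
from collections import deque

# Sketch of the clear-path graph idea: nodes are viewpoints and fruit
# locations, and an edge exists only where the Search arm has verified an
# unobstructed path. Obstacles themselves are never modeled, which keeps
# the representation low-dimensional.

def add_clear_path(graph, a, b):
    """Record a verified clear path between two nodes (undirected)."""
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def plan_to_fruit(graph, start, fruit):
    """Breadth-first search over verified clear paths only."""
    frontier, parents = deque([start]), {start: None}
    while frontier:
        node = frontier.popleft()
        if node == fruit:
            path = []
            while node is not None:       # walk back through parents
                path.append(node)
                node = parents[node]
            return path[::-1]
        for nbr in graph.get(node, ()):
            if nbr not in parents:
                parents[nbr] = node
                frontier.append(nbr)
    return None  # no verified clear route yet; the Search arm must explore more

# Hypothetical example: home pose -> waypoint w1 -> a detected apple
g = {}
add_clear_path(g, "home", "w1")
add_clear_path(g, "w1", "apple_3")
print(plan_to_fruit(g, "home", "apple_3"))  # ['home', 'w1', 'apple_3']
```

Because the graph retains previously verified edges, it naturally "encodes memory": paths confirmed in earlier views remain available to the planner until invalidated.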

Publications

  • Type: Journal Articles Status: Published Year Published: 2020 Citation: MinneApple: A Benchmark Dataset for Apple Detection and Segmentation N Häni, P Roy, V Isler IEEE Robotics and Automation Letters
  • Type: Journal Articles Status: Published Year Published: 2020 Citation: Semantic mapping for orchard environments by merging two-sides reconstructions of tree rows W Dong, P Roy, V Isler Journal of Field Robotics 37 (1), 97-121
  • Type: Journal Articles Status: Published Year Published: 2019 Citation: Vision-based preharvest yield mapping for apple orchards P Roy, A Kislay, PA Plonski, J Luby, V Isler Computers and Electronics in Agriculture 164, 104897
  • Type: Journal Articles Status: Published Year Published: 2019 Citation: A comparative study of fruit detection and counting methods for yield mapping in apple orchards N Häni, P Roy, V Isler Journal of Field Robotics
  • Type: Journal Articles Status: Published Year Published: 2019 Citation: Vision-based monitoring of orchards with UAVs N Stefas, H Bayram, V Isler Computers and Electronics in Agriculture 163, 104814
  • Type: Conference Papers and Presentations Status: Published Year Published: 2019 Citation: Sarabu, H., Ahlin, K., and Hu, A.-P., 2019, Graph-based cooperative robot path planning in agricultural environments, IEEE/ASME International Conference on Advanced Intelligent Mechatronics, pp. 519-525.
  • Type: Conference Papers and Presentations Status: Published Year Published: 2019 Citation: Sarabu, H., Ahlin, K. and Hu, A.P., 2019, Leveraging deep learning and RGB-D cameras for cooperative apple-picking robot arms, ASABE Annual International Meeting.
  • Type: Websites Status: Published Year Published: 2019 Citation: https://conservancy.umn.edu/handle/11299/206575


Progress 12/01/17 to 11/30/18

Outputs
Target Audience: Robotics and computer vision researchers; apple, berry and stone fruit growers; horticultural scientists. Changes/Problems: Nothing Reported What opportunities for training and professional development has the project provided? 3 UMN Computer Science students, 1 UMN Horticultural Sciences student, and 1 GA Tech engineering student were funded by the project. How have the results been disseminated to communities of interest? In addition to the publications reported above, the following presentations were given: Registering Reconstructions of the Two Sides of Fruit Tree Rows, IEEE International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain (October 2018); Apple Counting using Convolutional Neural Networks, IEEE International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain (October 2018); Active View Planning for Counting Apples in Orchards, IEEE International Conference on Intelligent Robots and Systems (IROS), Vancouver, Canada (September 2017); Linear Velocity from Commotion Motion, IEEE International Conference on Intelligent Robots and Systems (IROS), Vancouver, Canada (September 2017); Semantic Reconstruction of Apple Orchards, Agricultural Robotics: Learning from Industry 4.0 and Moving into the Future (Workshop at IROS 2017), Vancouver, Canada (September 2017); Robotic Surveying of Apple Orchards, 2018 United FreshTec Expo, Chicago, Illinois, US (June 2018); Semantic Mapping for Orchard Environments, 2018 Midwest Robotics Workshop, Chicago, US (June 2018); Computer Vision Systems for Yield Estimation and Forecasting, 2018 Minnesota Apple Growers Association Annual Meeting, La Crosse, Wisconsin, US (January 2018); Tree morphology for phenotyping from semantics-based mapping in orchard environments, Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping IV, SI19 SPIE Defense + Commercial Sensing (Submitted); Phenotyping Apple Fruit (Malus x domestica Borkh.) Using Portable Spectrometers, National Association of Plant Breeders Annual Meeting, Guelph, Ontario, Canada (August 2018) (Recording pending: https://goo.gl/MqevZP); "Robotics for Lot Size of One" - Georgia Tech Mechanical Engineering Department Seminar, Atlanta, GA (April 10, 2018); "Robotics 2.0 for Agricultural Applications" - AgTech Conference of the South, Alpharetta, GA (July 24, 2018). Furthermore, we performed the following outreach activities: Tools for Instant Fruit Quality Assessment, Minnesota Apple Growers Association Winter Seminar, La Crosse, Wisconsin (January 2018) (Video: https://goo.gl/8aF4gV); Tools for Instant Fruit Quality Assessment, University of Minnesota Arboretum Board of Trustees Annual Meeting, Chanhassen, Minnesota (September 2018); Robotics Applications in Ornamental Horticulture, October 17, 2018, a presentation by James Luby and a robotics demo by Minghan Wei and Kazim Selim Engin to ~100 people at the annual meeting of the Floriculture Research Alliance in St. Paul, MN. This research project was presented at Georgia Tech's Robotics Demo Day on April 12, 2018, targeting K-12 students in the Atlanta metro area. The project is also being used as a case study for Georgia Tech's "Agricultural Robotics" course taught by co-PI Hu, in which a soft gripper is being developed for fruit-picking applications. What do you plan to do during the next reporting period to accomplish the goals? We would like to request a one-year no-cost extension, which will allow us to collect field data for an additional apple growing season. So far in the project, UMN has completed the autonomous UAV navigation and yield mapping components. The yield mapping work performed as part of this project formed the basis of a new UMN startup (Farm Vision Technologies) and is in the process of being commercialized. We have also performed indoor destructive experiments using Near Infra-Red (NIR) measurements to quantify apple sweetness.
GTRI has developed a dual-robot-arm demonstration platform capable of searching for and approaching apples hanging within a cluttered tree. Indoor lab experiments on an artificial tree (with real apples) have been successful, and GTRI will perform initial field experiments in an apple orchard in Ellijay, GA during the final week of August 2018. The plan for September 2018 - August 2019 is to refine the demonstration platform's navigation, sensing, and control algorithms and make them more robust. UMN will work on attaching the NIR sensor to a robot arm to perform in-field measurements. GTRI will also add the capability to physically pick fruit off a tree by integrating an off-the-shelf robot gripper onto one of the robot arms. The above will culminate in a more thorough demonstration of in situ field experiments by August 2019. At UMN, a project-funded computer science PhD student and a horticulture Master's student are nearing the completion of their programs. A second, partially funded PhD student has reached candidacy. We will use the funds for these students, who will work on NIR-manipulator integration and deployment. We have also been using an old manipulator, which we already have, for demonstrations. This manipulator does not have sufficient reach for field experiments. We plan on purchasing an appropriate manipulator for the final experiments. At GTRI, a project-funded doctoral student who researched robot path planning methods completed his studies in Summer 2018, and a Masters student began work on the project in Spring 2018, focusing on image processing and robot experimental implementation. The Masters student will continue work on the project through August 2019 to achieve the remaining project objectives stated above.

Impacts
What was accomplished under these goals? "Registering Reconstructions of the Two Sides of Fruit Tree Rows": We consider the problem of building accurate three-dimensional (3D) reconstructions of orchard rows. This problem arises in many applications including yield mapping and measuring traits (e.g., trunk diameters) for phenotyping. We present a novel method that utilizes global features to constrain the solution. For additional information please see https://www.youtube.com/watch?v=6mGMF2gFv4M. "Apple Counting using Convolutional Neural Networks": Estimating accurate and reliable fruit and vegetable counts from images in real-world settings, such as orchards, is a challenging problem that has received significant recent attention. In this work, we formulate fruit counting from images as a multi-class classification problem and solve it by training a Convolutional Neural Network. Our method achieved 96-97% accuracy. For additional details: https://www.youtube.com/watch?v=Le0mb5P-SYc. "Active View Planning for Counting Apples in Orchards": Consider an agricultural automation scenario where a robot, equipped with a camera mounted on a manipulator, is charged with counting the number of apples in an orchard. In this work, we focus on the subtask of planning views so as to accurately estimate the number of apples in an apple cluster. We present a method for efficiently enumerating combinatorially distinct world models and computing the most likely model from one or more views. These are incorporated into single- and multi-step planners. We evaluate these planners in simulation as well as with experiments on a real robot. "A Novel Method for the Extrinsic Calibration of a 2D Laser Rangefinder and a Camera": We present a novel method for extrinsically calibrating a camera and a 2D laser rangefinder whose beams are invisible from the camera image.
We show that the point-to-plane constraints from a single observation of a V-shaped calibration pattern composed of two non-coplanar triangles suffice to uniquely constrain the relative pose between the two sensors. Next, we present an approach to obtain analytical solutions using point-to-plane constraints from single or multiple observations. "Semantic Mapping for Orchard Environments by Merging Two-Sides Reconstructions of Tree Rows": Measuring semantic traits for phenotyping is an essential but labor-intensive activity in horticulture. Our first main contribution is a novel method that utilizes global features and semantic information to obtain an initial solution aligning the two sides. Our mapping approach then refines the 3D model of the entire tree row by integrating semantic information common to both sides, extracted using our novel robust detection and fitting algorithms. Next, we present a vision system to measure semantic traits from the optimized 3D model, which is built from RGB or RGB-D data captured by only a camera. Specifically, we show how canopy volume, trunk diameter, tree height, and fruit count can be automatically obtained in real orchard environments. Experimental results from multiple datasets quantitatively demonstrate the high accuracy and robustness of our method. "A Comparative Study of Fruit Detection and Counting Methods for Yield Mapping in Apple Orchards": We present new methods for apple detection and counting based on recent deep learning approaches and compare them with state-of-the-art results based on classical methods. We evaluate the performance of three fruit detection methods and two fruit counting methods on six datasets. Results indicate that the classical detection approach still outperforms the deep learning based methods in the majority of the datasets. For fruit counting, though, the deep learning based approach performs better for all of the datasets.
Combining the classical detection method with the neural network based counting approach, we achieve remarkable yield accuracies ranging from 95.56% to 97.83%. NIR spectroscopy of apple fruit. Near infrared spectroscopy (NIRS), a well-established scientific tool, can be used to estimate chemical and microstructural characteristics of samples, including apple fruit. Technology has improved such that the devices can be used outside labs. In 2016, spectral data was collected and coupled with traditional phenotypic characterizations of apple fruit. An additional study collected data in 2017 using a reduced set of cultivars and fruit characteristics, but also employed a second, less expensive spectrometer. Preprocessing spectral data prior to model creation is often suggested in the NIRS literature. There are conflicting ideas about the best spectral preprocessing methods; the only consensus is that the method seems to be dependent on the sample types and dataset. We used Partial Least Squares Regression to model the data collected in 2016. Several preprocessing methods were tested by comparing the resulting model performances. Our current dataset showed that preprocessing did not confer an advantage in the predictive ability of the models. This analysis will be repeated, if necessary, with data from 2017 collected with both spectrometers. Each trait that was physically characterized in both years will be modeled, and both spectrometers will be assessed for their abilities to predict apple fruit post-storage characteristics and sensory evaluations, and to distinguish between models for grouped versus individual cultivars. Theoretical development and experimental validation of a new robot path planning method for obstacle avoidance. Robot path planning is used to determine a suitable (e.g., efficient, collisionless, safe) set of navigation commands to direct an autonomous robot from a starting point to a target location, avoiding obstacles en route.
One particularly computationally efficient approach is based on the concept of artificial potential fields (APFs), where the obstacles are treated as radiating a repulsive force field while the target is attractive. The computed path through the maze of obstacles is then based on a vector summation of these competing forces. A deal-breaking issue with traditional APF approaches is that the robot can end up in a "dead zone," where the forces sum to zero, and get stuck. As part of this project, we developed a new APF-based robot path planning method that is guaranteed to always converge and never get stuck. This is achieved by including tangential components in the obstacle force fields (refer to [1] for details, a copy of which is attached). Fielding of a sensor-equipped dual robot arm testbed for surveying and servoing in the context of an apple-picking task. This year saw continued development of dual-arm robot control for a cooperative apple-picking task. The "search" arm has a wide-angle camera attached to its wrist and processes views of the tree branches, leaves, and apples, as well as a fiducial (an AR marker) affixed to the second, "grasp" arm. A convolutional neural network-based object detection algorithm is used to detect apples. Note that no classification of the obstacles (branches and leaves) is performed; rather, only clear paths where the fiducial is visible are detected. The search arm thereby determines (and catalogs) both unobstructed paths to apples and also clear paths that are permissible for the grasp arm to maneuver within. The grasp arm is equipped with an RGB-D camera situated in the palm of its affixed gripper. It uses image-based visual servoing control to guide the gripper to the fruit, doing so without requiring a model of either the robot arm or of the unstructured tree environment.
Preliminary in situ tests in a working apple orchard in Ellijay, GA took place in late September, with promising results achieved for both the image processing method and the visual servoing controller. Expansion of the path prioritization and surveying method will take place in simulation and in the laboratory over fall 2018 and spring 2019.
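The tangential-component idea described above can be sketched as follows. All gains, the influence radius, and the choice of a 90-degree rotation are illustrative assumptions for this sketch, not the parameters of the published method [1]:

```python
import numpy as np

# Sketch: a standard attractive/repulsive artificial potential field, plus a
# tangential component on each obstacle field so the summed force cannot
# vanish in a "dead zone" where attraction and repulsion are collinear.

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=1.0, k_tan=0.5,
             influence=1.5, step=0.05):
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    force = k_att * (goal - pos)              # attraction toward the target
    for obs in obstacles:
        away = pos - np.asarray(obs, float)   # vector pointing off the obstacle
        d = np.linalg.norm(away)
        if 1e-9 < d < influence:
            # classic repulsion, active only inside the influence radius
            force += k_rep * (1.0 / d - 1.0 / influence) * away / d**3
            # tangential term: the repulsive direction rotated 90 degrees,
            # which breaks the head-on force balance that traps a plain APF
            force += k_tan * np.array([-away[1], away[0]]) / d**2
    # fixed-length step along the summed force direction
    return pos + step * force / max(np.linalg.norm(force), 1e-9)

# Worst case for a plain APF: the obstacle sits exactly on the line from
# start to goal, so attraction and repulsion are collinear and would cancel.
p, goal = np.array([0.0, 0.0]), np.array([5.0, 0.0])
for _ in range(2000):
    p = apf_step(p, goal, obstacles=[(2.0, 0.0)])
    if np.linalg.norm(goal - p) < 0.05:
        break
```

In this scenario the tangential term deflects the robot off the start-goal line, carries it around the obstacle, and lets the attractive field pull it to the target, whereas the purely repulsive version would stall at the zero-force point.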
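The image-based visual servoing used by the grasp arm can also be illustrated with the textbook control law v = -λ L⁺ (s - s*). The sketch below restricts itself to camera translation and four point features; the feature layout, gains, and pure-translation assumption are ours for exposition, not the project's implementation:

```python
import numpy as np

# Minimal image-based visual servoing (IBVS) sketch: drive the camera so the
# observed point features s match the desired features s*.

def interaction_matrix(points):
    """Stack the 2x3 translational interaction matrix of each point.

    points: Nx3 array of feature coordinates (X, Y, Z) in the camera frame.
    """
    rows = []
    for X, Y, Z in points:
        x, y = X / Z, Y / Z                  # normalized image coordinates
        rows.append([-1 / Z, 0.0, x / Z])    # dx/dt = row . v
        rows.append([0.0, -1 / Z, y / Z])    # dy/dt = row . v
    return np.array(rows)

def features(points):
    return np.concatenate([[X / Z, Y / Z] for X, Y, Z in points])

# Four fruit-sized feature points one meter ahead of the desired camera pose.
world = np.array([[-0.1, -0.1, 1.0], [0.1, -0.1, 1.0],
                  [0.1, 0.1, 1.0], [-0.1, 0.1, 1.0]])
s_star = features(world)                 # desired features (goal pose = origin)

cam = np.array([0.15, -0.10, -0.20])     # initial camera offset from the goal
lam, dt = 1.0, 0.05
for _ in range(400):
    pts = world - cam                    # same points seen from the current pose
    error = features(pts) - s_star
    v = -lam * np.linalg.pinv(interaction_matrix(pts)) @ error
    cam = cam + v * dt                   # integrate the commanded camera velocity
```

The loop drives the feature error exponentially to zero, returning the camera to the goal pose; as the text notes, no model of the arm or of the tree is needed, only the observed features.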

Publications

  • Type: Conference Papers and Presentations Status: Accepted Year Published: 2018 Citation: P. Roy, W. Dong, and V. Isler, Registering Reconstructions of the Two Sides of Fruit Tree Rows, IEEE International Conference on Intelligent Robots and Systems (IROS), Madrid, 2018. (In Press)
  • Type: Conference Papers and Presentations Status: Accepted Year Published: 2018 Citation: N. Häni, P. Roy and V. Isler, Apple Counting using Convolutional Neural Networks, IEEE International Conference on Intelligent Robots and Systems (IROS), Madrid, 2018. (In Press)
  • Type: Conference Papers and Presentations Status: Published Year Published: 2017 Citation: P. Roy and V. Isler, Active View Planning for Counting Apples in Orchards, IEEE International Conference on Intelligent Robots and Systems (IROS), pp. 6027-6032, Vancouver, 2017.
  • Type: Conference Papers and Presentations Status: Published Year Published: 2017 Citation: W. Dong, and V. Isler, Linear Velocity from Commotion Motion, IEEE International Conference on Intelligent Robots and Systems (IROS), pp.3467-3472, Vancouver, 2017.
  • Type: Journal Articles Status: Published Year Published: 2018 Citation: W. Dong, and V. Isler, A Novel Method for the Extrinsic Calibration of a 2D Laser Rangefinder and a Camera, IEEE Sensors Journal, 18(10), pp.4200-4211, 2018.
  • Type: Journal Articles Status: Under Review Year Published: 2018 Citation: P. Roy, A. Kislay, P. A. Plonski, J. Luby and V. Isler, Vision-Based Preharvest Yield Mapping for Apple Orchards, International Journal of Computer Vision. (Under Review)
  • Type: Journal Articles Status: Submitted Year Published: 2018 Citation: W. Dong, P. Roy, and V. Isler, Semantic Mapping for Orchard Environments by Merging Two-Sides Reconstructions of Tree Rows, Journal of Field Robotics. (Under Review)
  • Type: Journal Articles Status: Submitted Year Published: 2018 Citation: N. Häni, P. Roy, and V. Isler, A Comparative Study of Fruit Detection and Counting Methods for Yield Mapping in Apple Orchards, Journal of Field Robotics. (Submitted)
  • Type: Conference Papers and Presentations Status: Published Year Published: 2018 Citation: Ahlin, K., Sadegh, N., and Hu, A.-P., 2018, The secant method: global trajectory planning with variable radius, solid obstacles, Dynamic Systems and Control Conference, Atlanta, GA.
  • Type: Conference Papers and Presentations Status: Published Year Published: 2018 Citation: Ahlin, K., Bazemore, B., Boots, Byron, Burnham, J., Dellaert, F., Dong, J., Hu, A.-P., Joffe, B., McMurray, G., Rains, G., and Sadegh, N., Robotics for Spatially and Temporally Unstructured Agricultural Environments, in Robotics and Mechatronics for Agriculture, Chapter 3, (D. Zhang and B. Wei, Eds.), Boca Raton, FL: CRC Press (2018).
  • Type: Conference Papers and Presentations Status: Published Year Published: 2017 Citation: Ahlin, K., Hu, A.-P., and Sadegh, N., 2017, Apple picking using dual robot arms operating within an unknown tree, ASABE Annual International Meeting, Spokane, WA.


Progress 12/01/16 to 11/30/17

Outputs
Target Audience: We interacted with numerous orchard managers to understand their needs regarding surveying technologies. Changes/Problems: Nothing Reported What opportunities for training and professional development has the project provided? Multiple Ph.D. students from Horticultural Sciences, Computer Science, and Mechanical Engineering participated in the project. How have the results been disseminated to communities of interest? Through conference publications and presentations, public seminars, and direct interactions with orchard growers. What do you plan to do during the next reporting period to accomplish the goals? We would like to start developing a prototype that we can give to growers for testing.

Impacts
What was accomplished under these goals? UAV navigation: We developed obstacle avoidance behaviors and demonstrated fully autonomous UAV navigation in apple orchard rows. The UAV can follow a saw-tooth pattern to capture high-quality images of the fruit. Yield mapping: We continued our work on yield mapping. We improved our technique to incorporate more accurate estimates of the camera motion and the scene so as to track fruit across images (to avoid double counting). We now have a working prototype which we plan to test in commercial orchards next harvest. Manipulation: We developed a novel method for two robot arms to collaborate so as to accurately place an end-effector on a fruit. NIR studies: We investigated the use of low-cost NIR sensors to assess fruit quality in orchard settings.
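The double-counting safeguard mentioned above — tracking fruit across images using the estimated camera motion — can be illustrated with a deliberately simplified tracker. The gating radius, constant-shift motion model, and function names below are assumptions for illustration, not our actual estimator:

```python
import math

# Sketch of the double-counting safeguard: detections in a new frame are
# matched to existing tracks at their predicted positions (shifted by the
# estimated camera motion); only unmatched detections create new counts.

def update_tracks(tracks, detections, image_shift, gate=20.0):
    """tracks: dict id -> (x, y); detections: list of (x, y) in the new frame.

    image_shift is the expected pixel displacement of scene content between
    frames (opposite the camera's own motion).
    """
    dx, dy = image_shift
    predicted = {tid: (x + dx, y + dy) for tid, (x, y) in tracks.items()}
    next_id = max(tracks, default=-1) + 1
    new_tracks, unmatched_ids = {}, set(predicted)
    for det in detections:
        best, best_d = None, gate
        for tid in unmatched_ids:            # nearest predicted track within gate
            px, py = predicted[tid]
            d = math.hypot(det[0] - px, det[1] - py)
            if d < best_d:
                best, best_d = tid, d
        if best is not None:
            unmatched_ids.discard(best)      # same apple seen again: no new count
            new_tracks[best] = det
        else:
            new_tracks[next_id] = det        # genuinely new apple
            next_id += 1
    return new_tracks

# Two frames of the same two apples; the camera pans right, so image
# content shifts 30 px to the left between frames.
t = update_tracks({}, [(100, 50), (180, 60)], (0, 0))
t = update_tracks(t, [(72, 51), (151, 59)], (-30, 0))
print(sorted(t))  # [0, 1] -- two track ids total, so the count stays 2
```

Without the motion-compensated prediction, the second frame's detections would fall outside the gate and be counted as two new apples, inflating the yield estimate.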

Publications

  • Type: Conference Papers and Presentations Status: Published Year Published: 2017 Citation: Active View Planning for Counting Apples in Orchards
  • Type: Conference Papers and Presentations Status: Published Year Published: 2017 Citation: Linear Velocity from Commotion Motion
  • Type: Conference Papers and Presentations Status: Submitted Year Published: 2018 Citation: Registering Reconstructions of the Two Sides of Fruit Tree Rows
  • Type: Conference Papers and Presentations Status: Submitted Year Published: 2018 Citation: Fruit Counting from Images using Convolutional Neural Networks


Progress 12/01/15 to 11/30/16

Outputs
Target Audience: We presented the results from this project at the Agricontrols conference in Seattle as well as at seminars at the University of Minnesota, Washington State University, and Carnegie Mellon University. Changes/Problems: Nothing Reported What opportunities for training and professional development has the project provided? One graduate student at GTRI and two graduate students at UMN worked on the project. How have the results been disseminated to communities of interest? Yes, mainly as publications and seminars as reported earlier. What do you plan to do during the next reporting period to accomplish the goals? We would like to integrate the counting algorithm with the UAV and start working on active vision algorithms so as to improve surveying performance.

Impacts
What was accomplished under these goals? We made progress in three main areas: 1) We developed a UAV system, along with navigation and obstacle avoidance algorithms, and demonstrated that it can fly through orchard rows under windy conditions. Please see: http://rsn.cs.umn.edu/index.php/Autonomous_Vision-based_UAV_Navigation 2) We now have an accurate apple detection and counting system using only camera input: http://rsn.cs.umn.edu/index.php/Robotic_Yield_Estimation_of_Apple_Orchards 3) The GTRI group started working on a dual-manipulator system for servoing. They now have a working simulation. These results were reported at the NRI meeting on 11/29 to the program manager and other USDA NRI awardees.

Publications

  • Type: Conference Papers and Presentations Status: Published Year Published: 2016 Citation: Visual Servoing in Orchard Settings. N. Haeni and V. Isler. IEEE International Conference on Intelligent Robots and Systems (IROS), 2016
  • Type: Conference Papers and Presentations Status: Published Year Published: 2016 Citation: Vision Based Apple Counting and Yield Estimation. P. Roy and V. Isler. International Symposium on Experimental Robotics 2016
  • Type: Conference Papers and Presentations Status: Published Year Published: 2016 Citation: Vision-Based UAV Navigation in Orchards. Nikolaos Stefas, Haluk Bayram, Volkan Isler. Agricontrols 2016
  • Type: Conference Papers and Presentations Status: Published Year Published: 2016 Citation: Semantic Mapping of Orchards. C. Peng, P. Roy, J. Luby and V. Isler. Agricontrols 2016
  • Type: Conference Papers and Presentations Status: Published Year Published: 2016 Citation: Surveying apple orchards with a monocular vision system. Pravakar Roy, Volkan Isler, IEEE International Conference on Automation Science and Engineering 2016