Source: MOAI TECHNOLOGIES LLC submitted to
LOW COST SENSING AND ASSESSMENT OF GRAPEVINE CANOPY DENSITY FOR IMPROVED GRAPE PRODUCTION AND QUALITY
Sponsoring Institution
National Institute of Food and Agriculture
Project Status
TERMINATED
Funding Source
Reporting Frequency
Annual
Accession No.
1006364
Grant No.
2015-33610-23528
Project No.
MINW-2015-00708
Proposal No.
2015-00708
Multistate No.
(N/A)
Program Code
8.13
Project Start Date
Jun 15, 2015
Project End Date
Mar 14, 2016
Grant Year
2015
Project Director
Plocher, T.
Recipient Organization
MOAI TECHNOLOGIES LLC
14300 34TH AVE N 326
PLYMOUTH, MN 55447
Performing Department
(N/A)
Non Technical Summary
This project will develop and test a solution that provides grapevine canopy density measures using low-cost, off-the-shelf LIDAR (Light Detection and Ranging) sensors and intelligent processing. It will be affordable, usable by vineyard workers with minimal training, and of high diagnostic value to the vineyard manager who is responsible for making canopy management decisions and for the quality of the crop. As wine exports continue to increase from low-cost countries such as Chile and Argentina, the U.S. wine industry must find ways to stay competitive. This will be even more important in the future as the prospect of a rapidly growing Chinese wine industry producing and exporting high-quality wines becomes a reality. One way the U.S. wine industry can maintain its place in the market is to leverage technology and automation that both reduce the amount of labor required for grape production and increase the quality of the grape crop, leading to higher-quality wines. Grapevine canopy management is perhaps the most critical task in the production of uniformly high-quality grapes and is directly related to wine quality. Good canopy management practices also result in less disease, reducing the need for fungicide sprays. Thinning the canopy by means of hedging, topping, and leaf removal can be done by machinery. In current practice, however, the task of evaluating the vine canopy so that thinning can be done at exactly the right time and to the right extent is manual and labor-intensive. The proposed system will greatly reduce the amount of labor required to assess canopy density in a large commercial vineyard by largely automating the task with tractor-mounted sensors and automated data analysis. It will allow an entire vineyard canopy to be mapped with just one pass of a tractor and will pinpoint especially problematic areas that should be the focus of management tactics. It will also help reduce the amount of pesticides used in U.S. vineyards by reducing the conditions that foster grape fungus development.
Animal Health Component
0%
Research Effort Categories
Basic
(N/A)
Applied
(N/A)
Developmental
100%
Classification

Knowledge Area (KA) | Subject of Investigation (SOI) | Field of Science (FOS) | Percent
402 | 1131 | 2020 | 100%
Knowledge Area
402 - Engineering Systems and Equipment;

Subject Of Investigation
1131 - Wine grapes;

Field Of Science
2020 - Engineering;
Goals / Objectives
This project addresses the USDA Societal Challenge Area on Global Food Security and Hunger. Proposed is an application of a new technology with the potential to increase grape production and quality through better sensing and management of grapevine leaf area. Better canopy management, resulting from the proposed technology, will reduce the incidence of fungus diseases in grapevines and allow for fewer and better-targeted fungicide sprays. The proposed development is responsive to SBIR program solicitation Research Topic 8.13, Plant Production and Protection - Engineering, subpart 3, Improved Crop Production Methods or Strategies. It is also responsive to several of the Special Priority Research Areas for FY2015:
1. Improved chemical application technology
2. High resolution spatial and temporal monitoring of specialty crops using sensors and sensor networks
3. Reduction of manual labor in specialty crop production, harvesting, and post-harvest handling through technology to improve the competitiveness of US specialty crop production.
Project Methods
The objective of this Phase 1 project is to demonstrate that measuring grapevine canopy density with a very low-cost LIDAR device can produce results that correlate highly with measurements made by more manual techniques, such as point quadrat analysis and visual assessment, as well as with "ground truth" data produced by traditional direct measurement of leaf area. Our choice of device with which to demonstrate this is the Microsoft Kinect 2. We will also demonstrate the concept of georeferencing our canopy density measures to GPS location data as a first step toward constructing a vineyard-level view of the canopy density measurements. In Phase 2, the prototype system would be evaluated much more extensively for performance under a wider range of environmental conditions and with a wider range of grapevine training systems commonly used in the U.S. grape and wine industry. Vineyard-level mapping and visualization of the results to support more precise management decisions would also be further developed and tested with users in Phase 2.
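As an illustration of the planned analysis, agreement between the imaging system and a manual technique can be quantified as a Pearson correlation over paired per-vine measurements. A minimal sketch in Python (the data values below are purely illustrative, not project results):

    from scipy.stats import pearsonr

    # Paired per-vine canopy measurements: one value from the imaging
    # system and one from a manual method (illustrative numbers only).
    imaging = [12.4, 8.9, 15.2, 10.7, 9.3]   # e.g. percent gaps from the sensor
    manual = [11.8, 9.5, 14.6, 11.2, 8.7]    # e.g. percent gaps by point quadrat
    r, p = pearsonr(imaging, manual)
    print(f"r = {r:.2f} (p = {p:.3f})")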

Progress 06/15/15 to 03/14/16

Outputs
Target Audience: Our aim is for the 3D imaging system to be affordable to small and medium-size grape and wine producers, usable by vineyard workers with minimal training, and of high diagnostic value to the vineyard manager who is responsible for making canopy management decisions to improve the quality of the crop. Note that 90% of the wine producers in the US are in this small and medium target audience category. Changes/Problems: Nothing Reported What opportunities for training and professional development has the project provided? Nothing Reported How have the results been disseminated to communities of interest? Poster presented at the 4th VitiNord Conference, 12 November 2015, Nebraska City, NE. VitiNord is an international conference for producers of grapes and wine in northern-tier states and countries. It was attended by 300 people representing 10 countries and 21 states of the U.S. What do you plan to do during the next reporting period to accomplish the goals? Nothing Reported

Impacts
What was accomplished under these goals? In the past, LIDAR systems have been too costly for on-farm specialty crop applications, particularly among small and medium-sized producers. New, more consumer-oriented LIDAR products such as the Microsoft Kinect2 make LIDAR worth revisiting as a sensor for low-cost vine canopy density assessment. A Kinect2 device costs only $199. If a system for grapevine leaf canopy density measurement could be built around such an inexpensive sensing device, it would be affordable to all vineyards and wineries, regardless of size, and would provide an incentive to adopt the precision canopy management and spray practices that are so important to grape production and quality.

Objective 1. Assembling a prototype for field-testing. For testing the prototype system in the field, a simple wheeled platform was built from a two-wheel construction dolly, with modifications to support the Kinect sensor, a laptop computer for data acquisition and storage, and a battery for in-field power.

Objective 2. Field data collection in a vineyard. Sixty vines of the variety Frontenac gris were tested in two Minnesota vineyards. In our test procedure, the prototype 3D imaging system was placed at a constant distance and height from each vine: the two-wheel dolly platform carrying the battery and laptop kept the Kinect at a fixed height, and the middle trellis wire was used as a reference for maintaining a constant distance of 1m from the sensor to the vine. The image field of view (FOV) was defined as an area 7.6cm above and below the top and bottom trellis wires and 60cm on either side of the trunk of each vine. The FOV of each vine was marked with flagging tape so that manual measurements could be performed within the same FOV as the imaging. In addition to the automatic Kinect measurements captured at both vineyards, manual measurements were made using: 1) direct leaf area measurement by Skinkis and Schreiner's method; 2) the point quadrat method of Smart; and 3) visual canopy assessment using the Smart scorecard. The manual methods were an attempt to obtain "ground truth" measurements of canopy gaps and leaf layer number. Field-testing in a real outdoor vineyard environment showed that the Kinect2 sensor was robust to bright sunlight and leaf wetness, that averaging 8 images greatly reduced any problems of leaf movement due to wind, and that sufficient image resolution was achieved by placing the camera approximately midway between two rows of vines, about 1 meter from the vine. Even with the cart moved by hand and the camera manually aligned to each vine, the 3D imaging was 10-20 times faster than making comparable manual measurements on the canopy with any of the three manual methods.

Objective 3. Exploring alternative algorithms for canopy density measurements. The general process flow of the image processing canopy assessment system comprised 6 steps.

Step 1. Grayscale mapping. Raw data points were converted to grayscale levels representing the distance of objects from the sensor in the range from 0.7m to 1.7m. Leaves at 0.7m were mapped to a '0' value (white) and leaves at 1.7m were mapped to '255' (black).

Step 2. Background removal. The default range of the Kinect2 is 3.5m, deep enough that, unbounded, the image would capture both the vine of interest and vines in the row behind it. To remove unwanted background objects such as other vines, objects outside the 0.7-1.7m range were mapped to 0 (white).
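Steps 1 and 2 amount to a linear rescaling of the raw depth frame followed by a range gate. A minimal sketch in Python with NumPy, assuming the depth frame arrives as an array of distances in millimetres (the function name and data format are our illustration, not the project's code):

    import numpy as np

    def depth_to_grayscale(depth_mm: np.ndarray) -> np.ndarray:
        # Step 1: map the 0.7-1.7 m working range onto gray levels 0-255
        # (0.7 m -> 0 "white", 1.7 m -> 255 "black", ~4 mm per level).
        near, far = 700.0, 1700.0  # working range in millimetres
        gray = (depth_mm.astype(np.float32) - near) * 255.0 / (far - near)
        gray = np.clip(gray, 0.0, 255.0).astype(np.uint8)
        # Step 2: gate out background objects (e.g. vines in the next row).
        gray[(depth_mm < near) | (depth_mm > far)] = 0
        return gray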
Step 3. Frame averaging. To eliminate noise and blur, the gray level values (per pixel) of 8 consecutive image frames were averaged and a new image assembled. This averaging was key to producing a smoothed, complete final output image that minimized speckling and other distortive effects from light and movement.

Step 4. Binary thresholding. The grayscale images produced from the Kinect data span 256 unique intensity levels (0-255), each corresponding to a distance from the sensor. In binary thresholding, the original image was compared against a grayscale threshold value to classify areas of the image as either discernible features (e.g., leaves, cordon, trunk, stems, fruit clusters) or gaps. These gray level values were used in the calculation of the canopy density metrics.

Steps 5-6. Computing canopy density measures. Total leaf area was measured by a technique called "blob" detection. This computer vision technique detects and identifies regions of an image that have similar properties, such as brightness, color, or, in this case, grayscale level, compared to adjacent regions. When the algorithm finds a surface whose grayscale level is approximately constant across adjacent pixels, it groups those pixels into a "leaf blob". Our software then applies object edge detection to each detected leaf blob, comparing the gray-level pixel values of the blob to adjacent 'edge' pixels; this segments the leaf from the rest of the vine. Finally, the area within the detected edge is measured, and total leaf area is computed by summing the areas of all detected leaves.

The calculation of Leaf Layer Number (LLN) was approached in two ways. In the first approach, LLN was estimated by simply dividing the total leaf area from the imaging results by the camera field of view (FOV). The second approach used the fact that each grayscale level corresponds to 4mm of distance from the sensor into the canopy. We assumed that if the average width or thickness of a single leaf layer is known for a given variety, it can be combined with the leaf grayscale (distance-from-sensor) information to classify leaves into "leaf layer bins" along the gray scale. Applying this, we sorted the leaves on the test vines into 5 leaf layer bins by grayscale value: Layer 1 = 0-58; Layer 2 = 59-116; Layer 3 = 117-174; Layer 4 = 175-232; Layer 5 = 233-255.

To calculate the Percentage of Gaps in the canopy, we first isolated gaps by image processing, with gaps defined as areas where the sensor signal passes through the plant to the background and thus has a grayscale value of 0, or "white". The code indexed each gap individually, used blob detection to segment the gaps from the leafy areas of the vine, and computed the size of each gap blob. Percentage of Gaps was then computed as the total gap area as a proportion of the total pixel area of the vine image.
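Steps 3 through 6 can be compressed into a short sketch for the gap metric, using OpenCV's connected-component labelling as a stand-in for the blob detection described above (gray_stack is a hypothetical array of 8 frames produced by the mapping in steps 1-2; this is our sketch, not the project's code):

    import cv2
    import numpy as np

    def percent_gaps(gray_stack: np.ndarray) -> float:
        # Step 3: average the gray levels of 8 consecutive frames per pixel.
        avg = gray_stack.mean(axis=0).astype(np.uint8)
        # Step 4: binary thresholding; gray level 0 marks gaps where the
        # sensor passed through the canopy, everything else is vine.
        gap_mask = np.where(avg == 0, 255, 0).astype(np.uint8)
        # Steps 5-6: index each gap "blob" by connected-component labelling
        # and sum the blob areas (label 0 is the non-gap region).
        n, labels, stats, _ = cv2.connectedComponentsWithStats(gap_mask, connectivity=8)
        gap_area = stats[1:, cv2.CC_STAT_AREA].sum()
        return 100.0 * float(gap_area) / avg.size  # gaps as % of the vine image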
The 3D imaging system showed its best performance in measuring percent gap area in the canopy, with relatively high correlations with the observational methods. We believe the 3D imaging system itself may produce a value for gap area that is close to or at ground truth. Performance in estimating LLN was less clear. The highest correlation was between the imaging system values and those produced by Smart's subjective observations (r = 0.33); positive, but low. Moreover, the correlations between the three manual ways of measuring LLN were uniformly zero to low, so one has to question whether this complex of measures really reflects ground truth. We believe the key factor in measuring LLN is the layering effect caused by the occlusion of inner-layer leaves behind outer-layer leaves. Occlusion impedes the ability of the Kinect's IR laser to reach the surface of an occluded leaf. The use of multiple sensors at different angles would help to solve this problem. We also plan further work on segmenting out these inner-layer leaves and classifying them by virtue of their distinct, irregular shapes and their grayscale depth into the leaf canopy. Finally, we plan to investigate combining the thermal signatures of inner-layer leaves with their depth camera signatures to significantly improve the validity of our LLN measurements.
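As a concrete note on the five-bin grayscale classification of leaf layers described above, a minimal sketch (the bin boundaries are from the report; excluding gray level 0 as gap/background is our assumption):

    import numpy as np

    def leaf_layer_bins(gray: np.ndarray) -> np.ndarray:
        # Layer bins along the gray scale: 0-58, 59-116, 117-174,
        # 175-232, 233-255 (each gray level spans ~4 mm of depth).
        edges = np.array([0, 59, 117, 175, 233, 256])
        leaf = gray[gray > 0]  # drop gap/background pixels (our assumption)
        counts, _ = np.histogram(leaf, bins=edges)
        return counts  # number of leaf pixels in each of the 5 layer bins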

Publications

  • Type: Other Status: Other Year Published: 2015 Citation: Plocher, T., Hisamoto, C., and Hedstrom, K. Measuring grapevine canopy density using low-cost 3D imaging. Poster presentation at the 2015 VitiNord Conference, 12 November 2015, Nebraska City, NE.


Progress 06/15/15 to 03/14/16

Outputs
Target Audience: Nothing Reported Changes/Problems: Nothing Reported What opportunities for training and professional development has the project provided? Nothing Reported How have the results been disseminated to communities of interest? Nothing Reported What do you plan to do during the next reporting period to accomplish the goals? Current Work: Experimenting with alternative ways to estimate leaf layer number more accurately. By definition, inner leaves are occluded by outer leaves; image processing of 3D images might be able to recognize leaves that are partially occluded and classify them as belonging to an inner leaf layer. If the average width of a single leaf layer is known for a variety, training system, and site, then it can be used to classify leaves along the gray scale into leaf layer categories. We are also examining some initial thermal images, on the hypothesis that inner-layer leaves are shaded and therefore cooler in a thermal image than exterior leaves directly exposed to the sun. Work has started on the final report.

Impacts
What was accomplished under these goals?

Prototype Development. Built a prototype that can be taken into the vineyard for trials, consisting of the Kinect2 camera, a laptop computer, and a car battery and power adapter mounted on a mobile cart. Conducted initial tests with potted vines in the laboratory to establish the range, field of view, and number of scans needed for good operation in the field. Developed the image processing application needed to convert raw depth camera data into a form from which measures of vine canopy density can be extracted. The processing steps:
- Map Kinect2 pixel data to gray scale values: background objects (>1.7m from the camera) are mapped to '255' (white); objects close to the camera (0.513-0.7m) are mapped to '0' (black); objects in between (0.7-1.7m), i.e. leaves and clusters, are gray-scaled from '1' to '254'.
- Reduce noise and blur by averaging and combining images from 8 scans.
- Assign each pixel to either vine or background by binary thresholding (white, gray, or black).
- Assign a gray scale value from 0-255 to each pixel in the field of view (FOV), allowing 256 unique intensities of depth.
- Locate leaves and gaps using "blob" detection: groups of similar-gray-level pixels compared against adjacent 'edge' pixels.
- Compute percent gaps = total area of gap "blobs" / total area of leaf "blobs".
- Compute leaf layer number = total leaf area / camera FOV.

Field Trials. In late summer, we imaged 60 vines of the cultivar Frontenac gris in two different commercial vineyards. The vines had been topped and had undergone one or two rounds of lateral-shoot removal. These were highly vigorous vines with large canopies, trained to Vertical Shoot Positioning, and they were imaged from both sides. We collected measures of percent gaps and number of leaf layers using two manual methods: the point quadrat and Smart's observational scorecard. We also measured total leaf area and derived leaf layer number using Schreiner's method. All data collection, both imaging and manual, was completed in 3 days in the first vineyard and in one day in the second.

Findings. The system was robust to bright sunlight, wind, leaf wetness, and background, with good resolution at 1 meter from the vine (midway between rows). It produced highly accurate measures of canopy gaps and canopy width. Measures of leaf layer number are more difficult to obtain; we are currently comparing 3 different methods for computing leaf layers from images. Measurements using the 3D camera are an order of magnitude faster than manual measurements, even when the camera must be moved manually from vine to vine.
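The percent-gaps and leaf-layer-number formulas listed in the processing steps above reduce to simple ratios; a minimal sketch (the function and argument names are our own):

    def canopy_metrics(gap_blob_areas, leaf_blob_areas, fov_area):
        # Percent gaps = total gap-blob area / total leaf-blob area;
        # leaf layer number = total leaf area / camera field of view.
        percent_gaps = 100.0 * sum(gap_blob_areas) / sum(leaf_blob_areas)
        leaf_layer_number = sum(leaf_blob_areas) / fov_area
        return percent_gaps, leaf_layer_number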

Publications

  • Type: Conference Papers and Presentations Status: Published Year Published: 2015 Citation: Presented a poster about the project at the 4th Triennial Vitinord conference held in Nebraska City, NE during 11-14 November. The conference was attended by 250 people from 18 states and 11 countries. As a result of participating in the conference, we have identified at least one strong commercial winery field test partner for Phase 2.

