Source: ALLYKE LLC submitted to NRP
VIRTUOSO: VISUAL RECOGNITION OF PESTS FOR CROP SCOUTING
Sponsoring Institution
National Institute of Food and Agriculture
Project Status
COMPLETE
Funding Source
Reporting Frequency
Annual
Accession No.
1009545
Grant No.
2016-33610-25476
Cumulative Award Amt.
$100,000.00
Proposal No.
2016-00890
Multistate No.
(N/A)
Project Start Date
Aug 1, 2016
Project End Date
Mar 31, 2017
Grant Year
2016
Program Code
[8.13] - Plant Production and Protection - Engineering
Recipient Organization
ALLYKE LLC
14 MICA LANE SUITE 103
WELLESLEY, MA 02481
Performing Department
(N/A)
Non Technical Summary
Agriculture has seen rapid development in both the quantity and quality of food production; however, the presence of pests and disease on crops can hamper the quality of agricultural produce and can have a devastating effect on a farmer's bottom line. To combat the risk of pest infestations and disease, crop producers rely on a process known as crop scouting. Field-based crop scouting entails walking and surveying crop fields for yield-reducing pests (insects, weeds, and diseases) and determining when control strategies must be taken. Typically, scouting is performed by either the farm operator or a contract service provided by local farm co-operatives or crop consulting businesses. The crop scouts hired by these contract service providers are often college interns trained through brief "scouting schools" and then sent into fields equipped with a paper-based scouting report and manuals to use for field pest identification. This traditional approach to crop scouting is cumbersome, time-consuming, inefficient, and prone to inaccurate pest identification.

As smartphones and tablets become more entrenched in the daily life of agricultural production, mobile-based scouting software apps have emerged. Though these apps provide a more user-friendly system to record and manage field data, pest identification remains inefficient, usually accomplished by answering dozens of questions. There is a critical need for automated techniques to improve a user's scouting experience by making the path to identifying weeds, insects, or crop disorders easier, faster, and far more intuitive than at present.

Allyke proposes VIRTUOSO (Visual Recognition of Pests for Crop Scouting), an image analysis technology for automatically identifying pests (insects, weeds, and diseases) during field-based crop scouting. VIRTUOSO accelerates the tedious, manual process of pest identification. Rather than scouring large field pest identification manuals or answering dozens of often ambiguous questions posed by a mobile crop scouting app, VIRTUOSO helps crop scouts by automatically identifying the pest or crop disease from a photograph. VIRTUOSO uses machine learning to learn a hierarchy of features that unveil salient feature patterns and hidden structure in the data. Each layer leads to progressively more abstract features at higher levels of the hierarchy. As a result, the learned representations are richer than existing handcrafted image features, making it easier to extract useful data when building classifiers or other predictors.

Allyke has partnered with ScoutPro, a leading provider of mobile agricultural apps for crop scouting. ScoutPro has already established a critical mass of users for corn and soybean crop scouting applications. VIRTUOSO will be a cloud-based Software as a Service (SaaS) with a RESTful application programming interface (API). When integrated with ScoutPro's apps, we hypothesize that VIRTUOSO will improve not only scouting efficiency but also the accuracy of pest identification. It may also mean scouting happens more regularly, given the ease. Farmers will have a real-time understanding of crop pest pressures and gain the ability to make decisions more effectively, whether marketing their product, managing their risk, or simply understanding the crop pest issues impacting crop production.

The proposed innovation will represent a substantial breakthrough not only in the effective use of "Big Data" within agriculture and the USDA, but also in other application domains where large image datasets are prevalent, including social media, retail, robotics, and medicine. It will enable the automatic derivation of analysis products that allow practitioners to quickly analyze data from new domains while greatly minimizing human effort.
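The image-retrieval use of a learned representation described above can be illustrated with a toy sketch: given feature vectors (which in VIRTUOSO would come from the network's learned feature layer, but here are made-up stand-ins), rank a gallery by cosine similarity to a query. This is only a minimal illustration, not VIRTUOSO's actual engine.

```python
import numpy as np

def cosine_rank(query_feat, gallery_feats):
    """Rank gallery images by cosine similarity to a query feature vector."""
    q = query_feat / np.linalg.norm(query_feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sims = g @ q                      # cosine similarity per gallery image
    return np.argsort(-sims), sims    # most similar first

# Toy stand-ins for learned CNN feature vectors.
gallery = np.array([[1.0, 0.0, 0.0],
                    [0.9, 0.1, 0.0],
                    [0.0, 1.0, 0.0]])
order, sims = cosine_rank(np.array([1.0, 0.05, 0.0]), gallery)
print(order[0])  # index of the most similar gallery image
```

With richer learned features, the same ranking step scales to large pest-image galleries via approximate-nearest-neighbor indexing.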
Animal Health Component
50%
Research Effort Categories
Basic
(N/A)
Applied
50%
Developmental
50%
Classification

Knowledge Area (KA)   Subject of Investigation (SOI)   Field of Science (FOS)   Percent
214                   0199                             2080                     25%
213                   0199                             2080                     25%
211                   0199                             2080                     25%
212                   0199                             2080                     25%
Goals / Objectives
The overarching goal of the proposed Phase I work is to design and demonstrate the feasibility of a visual recognition engine for pest identification. Specific objectives include:

Requirements analysis: Allyke and ScoutPro will meet with the sponsor to refine system requirements, performance metrics, and image datasets of interest toward developing a preliminary requirements specification. Allyke will establish a prioritized set of functional requirements for the proposed system. This is best accomplished via scenario construction wherein hypothetical use of the proposed system most closely resembles real-world operational use.

Data acquisition: Allyke and ScoutPro will collect images from several data sources in order to train and evaluate the proposed system.

Design a deep learning architecture for pest recognition: Allyke will design a deep learning architecture that extracts salient features from photographs for identifying pests (weeds, insects, diseases) that afflict corn and soybean crops.

Demonstrate learned features for image analysis tasks: Allyke will demonstrate that the learned feature representation can be applied to image analysis tasks such as image classification and retrieval. The team will show that pest imagery can be modeled using the learned representation and that image indexing techniques can be applied for fast and efficient image search. In addition, the team will show that the learned representation can also be used to build image classifiers representing scenarios of interest to crop scouting.

Design backend application and API: Allyke will design a back-end server-based application to manage the imaging pipeline. In addition, we will design a web services API to allow seamless integration with third-party applications.

Evaluation: Allyke will design a system performance test plan that assesses the performance of the system subcomponents as well as the complete system. In addition, Allyke will define a set of demonstration success criteria, which will include datasets and metrics to assess the image search and classification algorithms.
Project Methods
Below is an outline of the specific methods/tasks for this effort:

1. Define requirements (in collaboration with ScoutPro)
- Conduct a kickoff meeting with the sponsor and ScoutPro to identify the requirements for each task as well as a cost/benefit analysis to help guide development efforts
- Perform operational task analysis to gain a better understanding of use of the technology
2. Data acquisition (in collaboration with ScoutPro)
- Gather and sort data for training and evaluation
- Truth data, which involves manually specifying true semantic labels for each example
3. Design a deep learning architecture for pest recognition
- Evaluate various transfer learning techniques. This might include techniques such as domain adaptation, network fine-tuning, and/or "network surgery"
- Train models
4. Demonstrate learned features for image analysis tasks
- Demonstrate on visual search
- Demonstrate on image classification of top 25 pests
5. Design backend application and API
- Define and implement API
- Design web-based application to manage the image processing pipeline
- Integration with ScoutPro app for proof-of-concept tool
6. Evaluation
- Define performance metrics. Common performance metrics include accuracy, sensitivity/recall, precision/positive predictive value, and F-measure
- Evaluate component algorithms and document results
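The performance metrics named in step 6 can be computed directly from predicted and true labels. A minimal numpy sketch (the labels and probabilities below are hypothetical toy values, not project data):

```python
import numpy as np

def precision_recall_f1(y_true, y_pred, cls):
    """Per-class precision, recall, and F1 (the harmonic mean of the two)."""
    tp = np.sum((y_pred == cls) & (y_true == cls))
    fp = np.sum((y_pred == cls) & (y_true != cls))
    fn = np.sum((y_pred != cls) & (y_true == cls))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

def top_n_accuracy(probs, y_true, n):
    """Fraction of samples whose true label is among the n most probable predictions."""
    top_n = np.argsort(-probs, axis=1)[:, :n]
    return np.mean([t in row for t, row in zip(y_true, top_n)])

y_true = np.array([0, 0, 1, 1, 2])
y_pred = np.array([0, 1, 1, 1, 2])
p, r, f1 = precision_recall_f1(y_true, y_pred, cls=1)
print(p, r, f1)  # 0.666..., 1.0, 0.8

probs = np.array([[0.6, 0.3, 0.1],
                  [0.2, 0.5, 0.3]])
print(top_n_accuracy(probs, np.array([1, 0]), n=2))  # 0.5
```

Averaging the per-class values over all classes yields the mean precision, mean recall, and mean F1 figures reported later in this document.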

Progress 08/01/16 to 03/31/17

Outputs
Target Audience: Allyke has contacted potential partners/customers within precision agriculture. Our primary focus has been on forward-thinking partners who have expressed interest in beta testing VIRTUOSO during the 2017 growing season. Data collected will help us determine real-world performance, gather end-user feedback, and refine the product offering throughout the proposed USDA Phase II effort. In addition to our partnership with ScoutPro, several other companies and individuals have indicated, in many cases through multiple interactions, a desire to develop a business arrangement with Allyke to integrate VIRTUOSO within their software applications and platforms. These companies include Farm Market ID, Ohio State University, Growmark, iNet Solutions Group, Neucadia, Farmers Edge, Farm Works (Trimble), ScoutPro, and SST Software.

Changes/Problems: Nothing Reported

What opportunities for training and professional development has the project provided? Nothing Reported

How have the results been disseminated to communities of interest? Allyke has contacted potential partners/customers within precision agriculture. Our primary focus has been on forward-thinking partners who have expressed interest in beta testing VIRTUOSO during the 2017 growing season. Data collected will help us determine real-world performance, gather end-user feedback, and refine the product offering throughout the proposed USDA Phase II effort. In addition to our partnership with ScoutPro, several other companies and individuals have indicated, in many cases through multiple interactions, a desire to develop a business arrangement with Allyke to integrate VIRTUOSO within their software applications and platforms.

What do you plan to do during the next reporting period to accomplish the goals? Nothing Reported

Impacts
What was accomplished under these goals?

Requirements Analysis: Allyke and ScoutPro held a kickoff meeting to refine the system requirements, define the performance metrics, and discuss the available ScoutPro datasets.

Data Acquisition: Allyke demonstrated the feasibility of VIRTUOSO using two crop disease datasets: PlantVillage, a publicly available, curated dataset of over 50,000 images of 38 crops and diseases, and an in-situ dataset of over 26,000 images of 20 soy and corn crop diseases collected by ScoutPro. The PlantVillage dataset consists of images captured in a laboratory with a constant background and with a consistent viewpoint and lighting. The goal for this dataset was to show that VIRTUOSO is capable of recognizing a wide variety of diseases. In contrast, the images in the ScoutPro dataset are taken in the field and more closely represent a real-world setting, exhibiting a wide variety of viewpoint and lighting conditions and backgrounds. Note that these images were captured without any guidance to the user.

Design a deep learning architecture for pest recognition:

A. Network Architecture. Allyke built upon two leading Convolutional Neural Network (CNN) architectures to perform image classification. Both of these architectures achieved state-of-the-art performance on the 2014 and 2015 ImageNet classification challenges. Allyke implemented and tested these architectures using Caffe, an open-source implementation of a CNN. Caffe's modular architecture enables rapid deployment, with networks specified as simple configuration files, and features a GPU mode for accelerated training and testing.

B. Image Pre-Processing. Allyke evaluated different image pre-processing steps as part of the system design. The first is mean image subtraction, whereby the mean training image is subtracted from the input image supplied to the classifier. This is a common pre-processing step used by deep learning classification architectures to normalize the data and adjust for lighting differences. The second pre-processing method is to convert color images to grayscale. Color images convey more information; however, they tend to be more susceptible to lighting variations. We evaluated both color and grayscale classification to determine whether color information is useful for the crop disease classification task.

C. Learning Algorithm. During Phase I, Allyke evaluated two different learning algorithms for training the CNN classifiers: 1) learning from scratch, which randomly initializes the network layer weights and learns them anew, and 2) network fine-tuning, which uses network weights from a CNN pre-trained on a very large dataset to initialize learning for the task of interest. The latter technique is known as transfer learning. In practice, it is relatively rare to have a dataset of sufficient size to train an entire CNN from scratch.

D. Classification Strategy. Allyke compared the learned CNNs to k-Nearest Neighbor (k-NN) classification using the pre-trained network image features. The k-NN classifier uses the final feature layer output by each network to compute the similarity between images. Classification is then performed by voting using the category labels of an image's k nearest neighbors. k-NN defines a simpler classification strategy that does not require any network re-training and instead replaces the linear classification layer of the pre-trained network with a non-linear voting step. k-NN was considered a baseline approach to gauge the effectiveness of the learned CNN classifiers.

Evaluation: Allyke defined an initial set of metrics targeted at evaluating classification performance:

Top-n accuracy: the percentage of correctly classified images. An image is deemed correctly classified if the correct category label matches one of the top-n most probable category predictions.
Recall: the correct detection rate relative to ground truth. It is the percentage of correctly detected instances out of all true instances of a particular class, averaged over all instances.
Precision: the likelihood that a detected instance corresponds to a real occurrence.
F1 score: combines the precision and recall rates into a single measure of performance, computed as the harmonic mean of precision and recall.

First, we evaluated the different design components of VIRTUOSO's deep learning framework using PlantVillage to establish a baseline performance. Then, we evaluated the system on the challenging ScoutPro dataset, which represents data collected from actual scouting missions.

Demonstrate learned features for image analysis tasks:

A. PlantVillage Results. We randomly divided the images into training and test sets of increasing sizes to show that performance improves with more training data. Allyke experimented with different training and test splits using mean image subtraction and full network fine-tuning. The best performing model achieves a classification accuracy of 99.6%. Our experiments showed that fine-tuning results in a significant improvement over learning from scratch, particularly for smaller training set sizes. In addition, our results showed that color image classification performs best. Fine-tuning the full network performed significantly better than only learning the final classification layer of the pre-trained network. Our experiments show that the CNN classifiers far outperform k-NN classification, the best performing k-NN classifier achieving a 93.8% top-1 accuracy. Both networks achieve a top-5 accuracy of near 100%. Finally, we performed an error analysis to determine the most common mistakes on this dataset. Corn gray leaf spot and northern leaf blight are the most confused crop disease categories in both networks, and the second most confused categories are grape esca and black rot. The mistaken images in these categories are difficult to visually differentiate, even for a human observer.

B. ScoutPro Results. We compared the best performing settings of Architecture A and Architecture B found from PlantVillage on the ScoutPro dataset. Again, we split the dataset into training and test splits of increasing sizes. Architecture A achieves the best performance, obtaining a 77% top-1 classification accuracy. Using relatively few training images, VIRTUOSO achieves an impressive top-3 and top-5 accuracy of 93.3% and 97.5%, respectively, on this challenging dataset. As the number of training images increases, we anticipate that the performance will only improve. An error analysis was performed to identify the common mistakes. Errors tend to arise from poor image capture: in many instances the correct crop is identified, but occlusions or a poor viewpoint make identification of the crop disease challenging from these images. These observations suggest that photos captured in a more consistent manner, in addition to more images, will help bridge the gap between laboratory and real-world performance.

Design backend application and API: During Phase I, Allyke developed the back-end infrastructure to manage the image analysis pipeline and store the query image and analysis results. This infrastructure resides on the Amazon Web Services (AWS) cloud computing platform. The platform provides the potential to seamlessly scale the back-end architecture (number of servers, databases, and other resources) based on user traffic. This server is developed in JavaScript and utilizes the Node.js run-time environment and the Express application framework.
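The two image pre-processing steps evaluated in this section (mean image subtraction and grayscale conversion) can be sketched in a few lines of numpy. This is a minimal illustration, not the Caffe pipeline itself, and the specific luminance weights used for the grayscale conversion are an assumption.

```python
import numpy as np

def preprocess(image, mean_image, to_grayscale=False):
    """Mean-subtract (and optionally grayscale) an RGB image before classification."""
    # Mean image subtraction: normalize the data and adjust for lighting differences.
    out = image.astype(np.float32) - mean_image.astype(np.float32)
    if to_grayscale:
        # Standard ITU-R luminance weights; the exact conversion is an assumption here.
        out = out @ np.array([0.299, 0.587, 0.114], dtype=np.float32)
    return out

# Toy 2x2 RGB "images" standing in for crop photographs and the mean training image.
img = np.full((2, 2, 3), 100, dtype=np.uint8)
mean_img = np.full((2, 2, 3), 90, dtype=np.uint8)
print(preprocess(img, mean_img)[0, 0])        # mean-subtracted RGB pixel
print(preprocess(img, mean_img, True)[0, 0])  # single grayscale value
```

Casting to float before subtracting matters: subtracting uint8 arrays directly would wrap around for pixels darker than the mean.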

Publications


Progress 08/01/16 to 03/31/17

Outputs
Target Audience: During this reporting period, Allyke held a kickoff meeting on Aug 12 with ScoutPro, our collaborator and potential future customer. In this meeting, we reviewed the Phase I objectives, data requirements, and work plan. On Sept 21, Allyke held a meeting to review our development progress with ScoutPro. In this meeting, we shared our preliminary results on the PlantVillage dataset, a curated dataset of over 50,000 images of 38 crops and diseases. ScoutPro was excited about these initial results and sent us 30,000 images of corn and soy crop diseases to evaluate. We are currently performing an analysis on this highly challenging dataset. On Oct 14, the Allyke team presented our initial results to a group of ConAgra and Gavilon executives. The purpose of this meeting was to explore commercialization opportunities in the Ag space. On December 1, the Allyke team presented our initial results to Trimble Navigation Ltd's Farm Works team. Trimble is interested in incorporating Allyke's image-based search into their Farm Works scouting app. On December 8, SST Software, makers of various software applications for crop scouting, signed a letter of intent to pilot a beta version of VIRTUOSO. Legal and business discussions are underway.

Changes/Problems: Nothing Reported

What opportunities for training and professional development has the project provided? Nothing Reported

How have the results been disseminated to communities of interest? PowerPoint presentations to ScoutPro, an agriculture mobile app developer, and former executives at Gavilon and ConAgra. Presentation to Farm Works, a division of Trimble Navigation Ltd; VIRTUOSO's image-based search could be integrated into Trimble's Farm Works app. Drafting a letter of intent with Farmers Edge Inc.; VIRTUOSO would be integrated into Farmers Edge's mobile application. SST Software agreed to sign a letter of intent; VIRTUOSO would be integrated into SST's scouting platform. Exploring partnership opportunities with Farm Market ID.

What do you plan to do during the next reporting period to accomplish the goals? During the next reporting period, Allyke will continue to explore commercialization opportunities for the VIRTUOSO technology.

Impacts
What was accomplished under these goals? During this reporting period, Allyke accomplished the following objectives:

Requirements Analysis: Allyke identified two key datasets of interest: PlantVillage and ScoutPro Corn and Soy. These datasets represent lab-setting and real-world imaging conditions, respectively. In addition, Allyke defined a set of metrics to evaluate the proposed algorithms: the mean F1 score, mean precision, mean recall, and overall top-1, top-3, and top-5 accuracy.

Data Acquisition: Allyke downloaded and preprocessed the PlantVillage dataset. The PlantVillage dataset is an open-access database of 50,000+ images of healthy and diseased crops.

Design a deep learning architecture for pest recognition: Allyke designed two network architectures for pest recognition. The first architecture is based on the GoogLeNet architecture. The key innovation of this architecture compared to its predecessors is the use of successive small-kernel convolutions to increase the local receptive field size, thus increasing computational efficiency. In addition, this architecture introduced intermediate loss layers due to the increased depth of the architecture. These loss layers ensure that the gradients propagate all the way back to the initial layers during training. Finally, we designed an architecture based on the award-winning ResNet deep network. These "residual" networks repeatedly stack smaller architectures and route the inputs to deep layers in the network. The state of the art in deep learning networks for generalized image recognition is based on the ResNet architecture.

Demonstrate learned features for image analysis tasks: Allyke trained each network on the PlantVillage dataset using two different training policies: training from scratch and transfer learning. With transfer learning, given an already-learned model, we adapt the architecture and resume backpropagation training from the already-learned model weights. One can fine-tune all the layers of the CNN or keep some of the earlier layers fixed (due to overfitting concerns) and only fine-tune some higher-level portion of the network. This is motivated by the observation that the earlier layers of a CNN contain more generic features (e.g., edge detectors or color blob detectors) that should be useful for many tasks, while later layers become progressively more specific to the details of the classes contained in the original dataset. In addition, Allyke developed a k-nearest neighbor classification algorithm to compare with the classification system inherent in the network.

Evaluation: Allyke split the PlantVillage images into 20%, 40%, 50%, 60%, and 80% train/test splits and reported the top-1 classification accuracy. We evaluated and compared the GoogLeNet and ResNet network architectures along the following criteria:

- Network setup: with and without mean image subtraction
- Learning algorithm: transfer learning vs. learning from scratch
- Image modality: color vs. grayscale
- Classifier: k-nearest neighbor vs. CNN

We then performed an error analysis to determine the most common mistakes. Below is a summary of the results:

- Mean image subtraction was beneficial for GoogLeNet, especially for smaller training set sizes; ResNet remained relatively unaffected
- Fine-tuning each network significantly increases performance over classifier-only learning
- Color ResNet with transfer learning is marginally better than color GoogLeNet in most cases, however with 2x more features
- The most common mistake is corn gray leaf spot vs. northern leaf blight; the second most common is grape esca vs. black rot
- GoogLeNet had 35 misclassifications out of 10,830 test images, and ResNet had 27 out of 10,830
- ResNet does slightly better than GoogLeNet with 2x more features, achieving an overall accuracy of 99.6%

The ScoutPro dataset consisted of 26,000+ images of 12 corn and 8 soy crop diseases. Unlike PlantVillage, the ScoutPro dataset represents real-world conditions of plants imaged in situ, with a large degree of viewpoint and lighting variation and with varying backgrounds and occlusion. Allyke split the ScoutPro dataset into 25%, 50%, and 75% train/test sets and reported the top-1 through top-5 classification performance along with the mean F1 score, mean recall, and mean precision. We compared the GoogLeNet and ResNet classifiers using the best performing settings found from PlantVillage and performed an error analysis to understand the most common mistakes. Below is a summary of the results:

- Allyke's auto-grouping capability was shown to be useful for dataset curation, helping to group and remove outlier images
- GoogLeNet achieved the best performance, with a top-1 accuracy of 77% and a top-5 accuracy of 97.5%
- For soy- and corn-only crop disease classification, GoogLeNet achieved top-5 accuracies of 97.2% and 98.3%, respectively
- Many high-performing classes exhibit a consistent viewpoint and/or background
- Future work is to formulate guidelines that help scouts take better pictures and to incorporate feedback to improve results
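The classifier-only variant of transfer learning discussed above (keep the pre-trained backbone fixed and retrain only the final classification layer) can be sketched with plain numpy. The actual Phase I work fine-tuned GoogLeNet and ResNet in Caffe; this toy version merely trains a softmax head by gradient descent on frozen, made-up "backbone" features.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_linear_head(feats, labels, n_classes, steps=200, lr=0.5):
    """Train only a linear classification head on frozen 'backbone' features,
    i.e. the classifier-only transfer-learning strategy."""
    W = np.zeros((feats.shape[1], n_classes))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[labels]
    for _ in range(steps):
        logits = feats @ W + b
        logits -= logits.max(axis=1, keepdims=True)        # numerical stability
        probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
        grad = (probs - onehot) / len(feats)               # softmax cross-entropy gradient
        W -= lr * feats.T @ grad                           # only the head is updated;
        b -= lr * grad.sum(axis=0)                         # backbone weights stay untouched
    return W, b

# Toy "pre-trained" features for two linearly separable classes.
feats = np.vstack([rng.normal(0, 0.1, (20, 4)) + [1, 0, 0, 0],
                   rng.normal(0, 0.1, (20, 4)) + [0, 1, 0, 0]])
labels = np.array([0] * 20 + [1] * 20)
W, b = train_linear_head(feats, labels, n_classes=2)
pred = (feats @ W + b).argmax(axis=1)
print((pred == labels).mean())  # expect 1.0 on this separable toy data
```

Full fine-tuning differs only in that the gradient also flows into the backbone weights, which is why it helps most when the target data differ substantially from the pre-training data.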

Publications