Recipient Organization
PENNSYLVANIA STATE UNIVERSITY
408 Old Main
UNIVERSITY PARK,PA 16802-1505
Performing Department
Agricultural & Biological Engineering
Non Technical Summary
The rapid application of science and technology on the farm has made the US the most food-secure nation the world has ever seen. To continue leading the development of science and technology on the farm, new tools must continually be developed to enhance human performance and productivity in agriculture. This is especially important as climate change leads to more frequent extreme weather events and globalization brings an increasing number of novel pathogens and pests. Given constraints on expanding the expert workforce (e.g., extension officers), inexpensive and efficient technology is required to monitor agricultural land, assess potential threats, and enable timely management while improving land productivity. Unmanned aerial vehicles (UAVs) coupled with deep learning models offer the potential to extend the capacity of US farmers to monitor, assess, and manage their land. My preliminary research has shown that deep learning models can distinguish healthy and diseased plants with over 90% accuracy using in-field images of plant leaves collected with digital cameras. In this project, I propose to investigate the performance of UAVs in collecting image datasets of agricultural land to support automated plant disease diagnosis with deep learning models. I plan to 1) collect image datasets of diverse crops and stressors at the Russell E. Larson Agricultural Research Station from different heights/image resolutions for a range of spectral bands to train deep learning models to automatically diagnose plant diseases, 2) compare the improvement in diagnosing plant health from image datasets collected with UAVs against the 3 m resolution satellite data available from Planet.com, and 3) build a workflow to train agricultural extension staff to use UAVs and deep learning models for automated plant disease detection.
Animal Health Component
100%
Research Effort Categories
Basic
(N/A)
Applied
100%
Developmental
(N/A)
Goals / Objectives
The goal of this project is to investigate the performance of UAVs in collecting image datasets of agricultural land to support automated plant disease diagnosis with deep learning models. If unmanned aerial vehicles (UAVs) can efficiently collect images of crops, and those images can be diagnosed with very high accuracy, UAVs will extend the human capacity to monitor and assess crop health, leading to increased land productivity in the US. To investigate this new tool I will address three key objectives:
1. Investigate the ideal UAV image datasets to detect incidents of plant diseases.
2. Examine the potential improvements in detecting incidents of plant diseases earlier in the field with vegetation index (VI) datasets compared to RGB datasets.
3. Build a workflow to train agricultural extension staff to use this new technology.
Project Methods
Objective 1: Investigate the ideal UAV image datasets to diagnose plant diseases.
UAVs can be flown at heights up to 400 ft above ground level in uncontrolled (Class G) airspace and can carry cameras that capture spectral reflectance measurements in the visible, red edge, and near-infrared bands. There are inherent trade-offs in monitoring fields with UAV technology, including flight time limits imposed by battery capacity, the flight time required to collect data (dependent on weather conditions such as wind speed, and on flight height), camera weight (a five-band multispectral camera versus a standard RGB camera), and the size of the image datasets (larger datasets require larger computer systems for analysis and data storage). To analyze the effects of UAV flight parameters on the performance of UAV datasets for diagnosing plant diseases, I will build deep learning models from a range of UAV datasets. The UAV parameter I will test for this objective is flight height (200 ft, 100 ft, 50 ft, 25 ft, 15 ft), and I will record flight time, battery usage, and dataset size (dependent on the percent sidelap and frontlap of the images taken). These parameters will be tested with an RGB camera (e.g., 1/2.3" CMOS sensor; effective pixels: 12.4 M (total pixels: 12.76 M); lens FOV 94°, 20 mm (35 mm format equivalent), f/2.8, focus at ∞). Flights will be conducted twice per week. The image datasets will be used to build deep learning models on the TensorFlow platform, and model results will be evaluated across datasets.
Objective 2: Examine the potential improvements in diagnosing plant diseases earlier in the field with vegetation index (VI) datasets compared to RGB datasets.
For this objective I will repeat the flight heights of Objective 1 with a MicaSense RedEdge camera (blue, green, red, red edge, and near-IR bands; global shutter, narrowband; 8.1 cm per pixel) attached to a UAV. I will record the flight time, battery usage, and dataset size (dependent on the percent sidelap and frontlap of the images taken) for the collection of this new dataset.
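The trade-off between flight height and image resolution described above can be illustrated with the standard ground sampling distance (GSD) formula. The sketch below uses illustrative sensor values assumed for a 1/2.3" CMOS, 12.4 MP camera (sensor width, true focal length, and image width are assumptions, not measured specifications of the camera to be flown):

```python
def ground_sampling_distance(height_m, sensor_width_mm=6.17,
                             focal_length_mm=4.7, image_width_px=4000):
    """Ground sampling distance (cm per pixel) for a nadir-pointing camera.

    GSD = (sensor width * flight height) / (focal length * image width).
    Default sensor parameters are illustrative assumptions for a 1/2.3"
    CMOS, 12.4 MP camera; substitute the flown camera's actual values.
    """
    return (sensor_width_mm * height_m * 100) / (focal_length_mm * image_width_px)

FT_TO_M = 0.3048  # convert the planned flight heights (given in feet) to meters

for height_ft in (200, 100, 50, 25, 15):
    gsd = ground_sampling_distance(height_ft * FT_TO_M)
    print(f"{height_ft:>3} ft -> {gsd:.2f} cm/px")
```

Under these assumed sensor values, halving the flight height halves the GSD, so lower flights trade shorter coverage per battery for finer per-pixel detail on the canopy.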
The image datasets will be used to build deep learning models on the TensorFlow platform, and model results will be evaluated across datasets. Because NIR reflectance responds more strongly to plant stress than visible (RGB) reflectance, the normalized difference vegetation index (NDVI), calculated with the NIR band, has the potential to identify problems days before they would appear in the visible atmospherically resistant index (VARI), which uses RGB bands alone. This suggests that UAV image datasets containing NIR data can be used to diagnose plant diseases earlier in the season than VARI-based datasets.
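The two indices being compared can each be computed per pixel from band reflectances using their standard definitions; a minimal sketch (the sample reflectance values are illustrative, not measured data):

```python
def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

def vari(red, green, blue):
    """Visible atmospherically resistant index, computed from RGB bands only:
    (Green - Red) / (Green + Red - Blue)."""
    return (green - red) / (green + red - blue)

# Illustrative per-pixel reflectances for healthy vegetation (assumed values):
# healthy leaves reflect strongly in NIR and green, weakly in red and blue.
print(ndvi(nir=0.50, red=0.08))
print(vari(red=0.08, green=0.12, blue=0.06))
```

NDVI requires the NIR band of the MicaSense RedEdge camera, while VARI can be computed from the standard RGB camera used in Objective 1, which is what makes the two objectives' datasets directly comparable.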