Progress 09/01/18 to 04/30/19
Outputs
Target Audience: The target audience for this project is USDA researchers and technologists with an interest in improving farm analytics data collected from airborne and drone instruments. Ultimately, this program will enable farmers and farm-services analytics providers to increase available flight days by 20-60%, depending on region and season, improving the value of drone systems and the level of crop intelligence available.
Changes/Problems: The original Phase I concept called for mapping clouds near the sun that cast shadows within our primary data collection system's nadir field of view (FOV) and projecting them onto a map of the ground beneath the sensor, to produce an illumination correction that could be applied to the data in post-processing. While this approach is feasible, we determined that the required baseline measurements are difficult, function well only under limited conditions, and add complexity to typical drone flights. We have therefore developed an alternative concept that reduces measurement complexity by directly observing and quantifying the cloud-shadowed illumination projected onto the ground.
What opportunities for training and professional development has the project provided?
Nothing Reported
How have the results been disseminated to communities of interest?
Nothing Reported
What do you plan to do during the next reporting period to accomplish the goals? The goal of Phase II is to develop a prototype of the cloud shadow correction system, including the required hardware and software components. We intend to use off-the-shelf hardware to the extent possible for the prototype and, if feasible, to use the existing drone camera and its associated GPS, timing, and inertial navigation data. Most drones used for remote sensing carry an onboard RGB camera for context viewing that could be appropriated. In some cases this camera may serve as the primary data collection system, in which case an additional wide-field-of-view (WFOV) imager would be required. The major focus of Phase II will be the development and testing of specialized software to georeference and overlay imagery from the moving platform, detect frame-to-frame changes indicating cloud shadow motion, build up a temporal cloud shadow map including attenuation levels, and apply those maps to the data in a correction algorithm.
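To illustrate the georeferencing step described above, the sketch below maps a pixel of a nadir-pointing camera to its ground offset from the point beneath the drone. This is a simplified flat-terrain, level-camera illustration with hypothetical function names; it ignores lens distortion and attitude, and is not the project's actual software.

```python
import math

def nadir_swath_m(alt_m, fov_deg):
    """Ground swath width (m) of a nadir-pointing camera at altitude alt_m
    with full angular field of view fov_deg (flat-terrain approximation)."""
    return 2.0 * alt_m * math.tan(math.radians(fov_deg) / 2.0)

def pixel_to_ground_offset(px, py, width, height, alt_m, fov_deg):
    """Map an image pixel to its (east, north) offset in meters from the
    point directly beneath the drone, assuming a level, north-aligned camera."""
    gsd = nadir_swath_m(alt_m, fov_deg) / width  # ground sample distance, m/px
    return ((px - width / 2.0) * gsd, (py - height / 2.0) * gsd)
```

For example, at 100 m altitude with a 90-degree FOV, the swath is 200 m across, so a 1000-pixel-wide frame has a ground sample distance of 0.2 m/pixel; offsets computed this way let successive WFOV frames be overlaid on a common ground grid.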
Impacts What was accomplished under these goals?
Intermittent cloud shadowing on the ground during an image data collection from a manned aircraft or drone platform results in variability in solar illumination of the data, often rendering it unusable by current data processing routines. Shadows affect the ability to compare imagery, including multispectral and hyperspectral imagery (MSI and HSI), taken over periods of time. Furthermore, shadows affect the ability to quantify spectral image data analytics, specifically vegetative indices based on molecular absorption of light in characteristic spectral bands, which form sophisticated metrics for crop health, water stress, pigments, diseases, and soil nutrients. Recognizing that direct observation of cloud shadows on the ground is hindered by ground clutter, the original Phase I concept called for an all-sky imager to map the sky during data collection and project the illumination correction onto the 'primary' image data. Critical components of such a system include the acquisition and storage of sky imagery, algorithms to project and transform the imagery into quantifiable ground irradiance maps that account for shadow, and finally the application of irradiance maps to correct cloud shadow in the simultaneously collected primary image data. The key objective of Phase I was to establish this approach. To fulfill the Phase I objective, we undertook a number of tasks, including simulation of the sky and resultant cloud shadow imagery used in the proposed approach, development of essential algorithms for cloud projection, and acquisition of field data to demonstrate the approach in real-world conditions. All tasks were completed and are described in this report. However, the key result of our Phase I study was to substantially revise the approach to cloud shadow mapping and correction we propose for Phase II. We were able to demonstrate key components of this new approach in Phase I and, based on the results, complete the final technical task: specifying a Phase II system.
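The cloud-projection geometry underlying the original sky-imager approach can be sketched as follows; this is an illustrative simplification (flat terrain, a single cloud altitude, azimuth measured clockwise from north), not the algorithm as implemented.

```python
import math

def shadow_offset_m(cloud_alt_m, sun_elev_deg, sun_az_deg):
    """Horizontal (east, north) displacement of a cloud's shadow from the
    point directly below the cloud. The shadow lies on the side opposite
    the sun's azimuth, at a distance h / tan(solar elevation)."""
    d = cloud_alt_m / math.tan(math.radians(sun_elev_deg))
    az = math.radians(sun_az_deg)
    # Unit vector toward the sun (east, north) is (sin az, cos az);
    # the shadow is displaced in the opposite direction.
    return (-d * math.sin(az), -d * math.cos(az))
```

For a cloud at 1000 m with the sun due south at 45 degrees elevation, the shadow falls 1000 m north of the cloud's ground position; the sensitivity of this offset to the estimated cloud altitude is one reason the baseline measurements proved difficult.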
As described under Changes/Problems, the original Phase I concept mapped clouds near the sun and projected them onto the ground beneath the sensor to produce an illumination correction applied in post-processing. Because the required baseline measurements proved difficult, worked well only under limited conditions, and added complexity to typical drone flight, we developed an alternative concept that directly observes and quantifies the cloud-shadowed illumination projected onto the ground. Quantification of the cloud shadow requires discriminating the shadow from a widely varying, cluttered background. Our new Phase II approach accomplishes this discrimination using the shadow's motion as it traverses the ground. This requires a very wide field of view that covers a significant swath of the field being measured, not just the instantaneous footprint of the primary image collection system, so that every pixel in the field is captured under both sunlit and changing shadow conditions. The ShadowMapper camera images the field at intervals of a few seconds, and the changes in illumination across the field are captured and extracted using change detection algorithms. Once the shadow radiance is mapped onto the primary data, the differences are used to adjust the radiance measured by the primary system, or the reflectance subsequently determined using atmospheric correction software. The plan for the Phase II system comprises the following processes. 1) A time sequence of WFOV ground images is collected simultaneously with the primary data collection. 2) Change detection algorithms discriminate cloud shadow positions and intensities as they pass over the ground.
3) This information is correlated with imagery from the narrower-field primary data collection device to produce a frame illumination mask. 4) This mask is applied as a correction to the radiant flux obtained with the primary device. 5) The corrected crop imagery can then be used for crop analytics, such as vegetative indices, which ultimately produce intelligence of relevance to the farmer. The Phase I project was critical to arriving at the proposed Phase II concept. In Phase I we demonstrated the methods needed to project sky illumination onto the ground, using sky images to determine cloud altitudes and, subsequently, the diffuse radiance due to clouds. This exercise clarified the required algorithms, parameters, and limitations of our initial approach. The field demonstrations, from the ground and from our Matrice 600 drone, introduced us to the hardware and inspired the measurement approach we have adopted for Phase II.
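The change-detection and correction steps in the process above can be sketched on a toy grid as follows. The function names and the max-over-sequence sunlit baseline are illustrative assumptions; a working system would also handle frame registration, sensor noise, and shadow-edge thresholds.

```python
def sunlit_reference(frames):
    """Per-pixel maximum over a time sequence of co-registered WFOV frames,
    taken as the fully sunlit brightness for each ground pixel."""
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[max(f[r][c] for f in frames) for c in range(cols)]
            for r in range(rows)]

def illumination_fraction(frame, reference):
    """Each pixel's brightness as a fraction of its sunlit reference
    (1.0 = fully lit; lower values indicate cloud shadow)."""
    return [[frame[r][c] / reference[r][c] for c in range(len(frame[0]))]
            for r in range(len(frame))]

def correct_radiance(primary, frac):
    """Divide the primary sensor's radiance by the illumination fraction
    to restore an estimate of the fully sunlit signal."""
    return [[primary[r][c] / frac[r][c] for c in range(len(primary[0]))]
            for r in range(len(primary))]

# Toy sequence: a 2x2 field, with a shadow darkening one pixel in frame 2.
# The same grid stands in for the primary frame here for simplicity.
frames = [[[100.0, 100.0], [100.0, 100.0]],
          [[50.0, 100.0], [100.0, 100.0]]]
ref = sunlit_reference(frames)
frac = illumination_fraction(frames[1], ref)   # shadowed pixel at 0.5
corrected = correct_radiance(frames[1], frac)  # shadowed pixel restored
```

Dividing by the illumination fraction restores the darkened pixel to its sunlit value of 100, which is the intent of the frame illumination mask in step 4.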
Publications