Source: SALO SCIENCES, INC. submitted to
MAPPING AND PREDICTING TREE MORTALITY USING HIGH-RESOLUTION NANOSATELLITE DATA FOR IMPROVED FOREST MANAGEMENT
Sponsoring Institution
National Institute of Food and Agriculture
Project Status
EXTENDED
Funding Source
Reporting Frequency
Annual
Accession No.
1016103
Grant No.
2018-33610-28219
Project No.
CALW-2018-00556
Proposal No.
2018-00556
Multistate No.
(N/A)
Program Code
8.1
Project Start Date
Aug 1, 2018
Project End Date
Mar 31, 2020
Grant Year
2018
Project Director
Marvin, D. C.
Recipient Organization
SALO SCIENCES, INC.
3536 22ND ST
SAN FRANCISCO,CA 94114
Performing Department
(N/A)
Non Technical Summary
This project will enable forest monitoring at previously inaccessible scales by combining daily, high resolution satellite imagery with deep learning algorithms and extensive field observations. The innovation in this proposal is not in developing any one of the approaches; high resolution imagery has previously been used to map tree mortality, and deep learning has been used for object detection in high resolution remote sensing data. The innovation is in applying these approaches to noisy, high frequency, and high resolution satellite imagery to consistently and accurately detect fine-scale ecological patterns and monitor change over time. These analyses can be used to inform land use planning and forest management. These tree mortality maps will improve the coordination of forest management activities by precisely identifying areas facing the greatest fire risk, optimizing timber harvest strategies, and forecasting future risks.
Animal Health Component
0%
Research Effort Categories
Basic
50%
Applied
25%
Developmental
25%
Classification

Knowledge Area (KA) | Subject of Investigation (SOI) | Field of Science (FOS) | Percent
123 | 0613 | 1070 | 100%
Goals / Objectives
California's forest ecosystems are experiencing severe stress from drought, heat, fires, and pest outbreaks. Forest dieback, driven by high rates of tree mortality, is widespread across the state and is expected to increase as these stresses shift in geographic range and in intensity. Higher fire frequency and intensity, resulting from increasing rates of tree mortality, are expected to increase economic burdens on a wide range of institutions, including natural resource managers, firefighters, state and federal agencies, and homeowners. Furthermore, they are expected to impose a high cost on human health and wellbeing. A preventative, targeted approach to fire management is expected to reduce the economic, health, and environmental costs related to forest dieback, and developing tools to monitor and predict the risks posed by tree mortality across the state will be necessary to adopt a preventative management system. Our project proposes to use high resolution, high frequency satellite imagery and deep learning algorithms to develop a novel tree mortality mapping methodology to: a) identify dead trees at high spatial resolution; b) identify the approximate time of death at high temporal resolution; and c) forecast future mortality risk at moderate spatial resolution through integration with climate and land cover data. We plan to commercialize the products developed in this proposal through sales of data and analyses to large landowners across California. We believe that, once mature, this approach can scale beyond California and be applied in other states with large forests and high tree mortality rates.
Project Methods
We propose to develop a prototype mapping methodology to identify individual- and stand-level tree mortality across California at high resolution, high frequency, and low cost. Meeting these requirements will require a big data-style approach to analysis and operations. We plan to achieve this by combining daily, high resolution nanosatellite imagery with deep neural network algorithms and an extensive series of field observations of tree mortality. First, we plan to sample field sites and imagery using a stratified geographic sampling of California's ecosystems. We will use spatially explicit observations of live and dead trees to extract image data for model training. We plan to use novel methods for training neural networks to predict tree mortality from image data extracted at field collection locations. The goal of our proposal is to predict fire risk, target individual trees for removal, and map temporal trends in tree mortality using high resolution, regularly updated maps of tree mortality.
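As a rough illustration of how spatially explicit field observations might be paired with imagery for model training, the sketch below extracts fixed-size image chips around point locations from a GeoTIFF mosaic. The file name, chip size, band layout, and label convention are illustrative assumptions rather than the project's actual implementation.

```python
# Minimal sketch (not the project's actual code): extract fixed-size image
# chips around field-observed tree locations to build a training set.
# "planet_mosaic.tif" and the (x, y, label) point format are hypothetical.
import numpy as np
import rasterio
from rasterio.windows import Window

CHIP_SIZE = 64  # pixels per side; an assumed value

def extract_chips(raster_path, points):
    """Return (chips, labels) for an iterable of (x, y, label) tuples,
    where x/y are map coordinates in the raster's CRS and label is,
    for example, 1 for a dead tree and 0 for a live tree."""
    chips, labels = [], []
    with rasterio.open(raster_path) as src:
        for x, y, label in points:
            row, col = src.index(x, y)  # map coordinates -> pixel indices
            window = Window(col - CHIP_SIZE // 2, row - CHIP_SIZE // 2,
                            CHIP_SIZE, CHIP_SIZE)
            # boundless read pads chips that fall off the edge of the mosaic
            chip = src.read(window=window, boundless=True, fill_value=0)
            chips.append(chip)
            labels.append(label)
    return np.stack(chips), np.array(labels)

# Usage (hypothetical): chips, labels = extract_chips("planet_mosaic.tif", field_points)
```

Windowed reads keep memory use low because only the pixels around each observation are loaded, rather than the full statewide mosaic.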

Progress 08/01/18 to 07/31/19

Outputs
Target Audience: We made consistent outreach efforts to a range of potential end-users throughout the reporting period. State agencies in California we have spoken to include CALFIRE (forest and wildfire), the CA Natural Resources Agency, and Caltrans (transportation). Each of these state agencies has responsibility over land or right-of-way management, and access to regular tree mortality monitoring is of high interest to them. We also had conversations with US Forest Service Region 5 scientists, forest supervisors, and leadership. We held outreach discussions with conservation organizations such as The Nature Conservancy and the Sierra Nevada Conservancy, which either directly manage land or provide funding to land managers to improve forest stewardship. Although it falls just outside the reporting period, we have been invited to present our California forest mortality mapping work at the annual meeting of the California Forest Pest Council in two weeks (November 13-14, 2019).

Changes/Problems: There were no major delays or disruptions in our research plan. However, we decided to extend our research timeline (see Accomplishments) to allow continued model experimentation to maximize the accuracy of our deep learning algorithms in distinguishing dead trees from living trees. Instead of the original project end date of March 31, 2019, we are targeting a project end date of January 31, 2020.

What opportunities for training and professional development has the project provided? By engaging with stakeholders in the academic, non-profit, and government agency communities and demonstrating our mortality mapping capacity, we received two significant opportunities. First, we were invited to join a consortium of companies, universities, and non-profits to submit an (ultimately successful) competitive proposal for a large California Energy Commission grant to develop a next-generation wildfire risk model that accounts for extreme tree mortality. Second, the software capacity we gained during our Phase I project allowed us to demonstrate the application of deep learning to derive new ecological insights, such as mapping tree height. This ultimately led to our successful grant application to the Gordon and Betty Moore Foundation to build the California Forest Observatory, a real-time forest fuels and wildfire hazard mapping system.

How have the results been disseminated to communities of interest? Nothing Reported

What do you plan to do during the next reporting period to accomplish the goals? Nothing Reported

Impacts
What was accomplished under these goals? Western US forest ecosystems are experiencing severe stress from drought, heat, fires, and pest outbreaks. Forest dieback, driven by high rates of tree mortality, is widespread across the region and is expected to increase as these stresses shift in geographic range and intensity. Higher fire frequency and intensity, resulting from increasing rates of tree mortality, are expected to increase economic burdens on a wide range of institutions, including natural resource managers, firefighters, state and federal agencies, and homeowners. In this project, Salo Sciences built and tested a system for forest mortality monitoring at previously inaccessible scales by combining daily, high resolution satellite imagery with deep learning algorithms and extensive field observations. Our comprehensive (or "wall-to-wall") maps of mortality at the individual tree scale can be used to improve the coordination of forest management activities by precisely identifying areas facing the greatest fire hazard, optimizing timber salvage strategies, and forecasting future risks.

Objective 1. Development of a stratified geographic sampling scheme, designed to sample imagery from the varied ecosystems and land use types across California, to capture a wide range of image variability for training a deep learning model. California has a diverse set of forest ecosystems with varying gradients of ecological, phenological, and morphological properties. In order for our AI algorithm to accurately map tree mortality anywhere in the state, it must be able to distinguish dead trees from the variations in these properties. To achieve this generalizability, the algorithm must be trained with satellite imagery from across these gradients. We built sampling software that randomly samples image tiles in equal proportions across a range of environmental patterns, as defined by the user (a minimal sketch of this sampling logic follows Objective 2). The software can aggregate tiles from multiple input sources for cases where we use more than a single source of satellite imagery or other datasets. These image tiles are then used as inputs to our mortality mapping AI algorithm.

Objective 2. Collection of a robust set of field observations based on the stratified geographic sampling scheme to train a deep learning model. Field observations are central to validating the algorithm outputs. Field-verified data on tree mortality will be used to assess the accuracy of the final tree mortality maps produced by the algorithm. To gather enough field data, we completed a large field data collection campaign in October 2018. The PI and Co-I, along with two field assistants, mapped the locations of individual dead tree stems using a GPS system linked to satellite imagery on handheld tablets. We identified each tree species when possible and collected data on the approximate time since death. In total, we mapped more than 5,000 trees over 175 acres across multiple environmental gradients. While our original objective was to collect field data for algorithm training, these data are more appropriate for algorithm output validation.
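The equal-proportion sampling described under Objective 1 can be illustrated with a minimal sketch. The stratum names and tile identifiers below are hypothetical, and the real sampling software additionally aggregates tiles across multiple imagery sources.

```python
# Minimal sketch (hypothetical, not the project's sampling software):
# draw an equal number of image tiles from each environmental stratum.
import random
from collections import defaultdict

def sample_tiles_by_stratum(tile_strata, n_per_stratum, seed=42):
    """tile_strata maps tile_id -> stratum label (e.g. an ecoregion or
    land cover class). Returns tile_ids sampled in equal proportions."""
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for tile_id, stratum in tile_strata.items():
        by_stratum[stratum].append(tile_id)
    sample = []
    for tile_ids in by_stratum.values():
        k = min(n_per_stratum, len(tile_ids))  # guard against small strata
        sample.extend(rng.sample(tile_ids, k))
    return sample

# Usage (hypothetical strata): 25 tiles each from three ecoregion classes.
# tiles = {"tile_0001": "sierra_conifer", "tile_0002": "coastal_oak", ...}
# training_tiles = sample_tiles_by_stratum(tiles, n_per_stratum=25)
```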
Objective 3. Standardization of image pre-processing steps, such as scaling RGB values, calculating indices (such as normalized ratios between bands), and performing noise removal. Preliminary research found we did not need to develop some preprocessing tasks, like atmospheric correction, since the corrections implemented by the data provider, Planet, were of high quality. Additionally, testing has shown that our algorithms are robust to the inter-scene and inter-sensor variation in Planet data. This reduces the need for noise removal and whole-scene normalization, eliminating a large pre-processing burden. Instead, we focused on developing a standardized gridding system and a high-throughput image loading and data normalization pipeline to facilitate rapid image processing in a cloud computing environment. The gridding system reprojects and aligns data from multiple image sources into a standardized grid, producing consistent and spatially indexed imagery with perfect overlap between datasets. The image grids are designed to seamlessly feed image tiles into the dataloader, which in turn feeds images into the algorithm given a list of tile paths. By reading images in real time, we reduce memory overhead and can process much larger training datasets. The dataloader also performs per-tile image normalization on the fly, reducing the number of pre-processing steps and allowing flexibility in how and when we perform normalization.

Objective 4. Optimization of the deep learning algorithm, including selection of algorithm architecture(s), image input size, and learning rate strategy. This objective forms the core of our work under Phase I. We identified the appropriate family of convolutional neural network (CNN) architectures to test and developed the software to evaluate different iterations of model structure, inputs, and hyperparameters. As part of this testing phase, we experimented with varying image sizes and resolutions, combinations of image sensor data, loss functions, and model depths. Our results showed that we could achieve accuracies ranging from 70-85% in distinguishing tree mortality. However, the insights gained from this experimentation lead us to believe we can achieve much higher accuracy (approaching 95%). We received a no-cost extension of our Phase I project and are continuing these experiments to maximize the accuracy of our algorithm.
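To illustrate the on-the-fly tile loading and per-tile normalization described under Objective 3, and the kind of CNN classifier evaluated under Objective 4, here is a minimal PyTorch-style sketch. The file format, band count, normalization scheme, and network layout are assumptions for illustration and are deliberately much simpler than the architectures tested in Phase I.

```python
# Hypothetical sketch of an on-the-fly tile dataloader feeding a small CNN.
import numpy as np
import torch
from torch import nn
from torch.utils.data import Dataset, DataLoader

class TileDataset(Dataset):
    """Reads image tiles from disk only when requested, keeping memory low."""

    def __init__(self, tile_paths, labels):
        self.tile_paths = tile_paths  # assumed: .npy tiles shaped (bands, H, W)
        self.labels = labels          # assumed: 1 = dead tree, 0 = live tree

    def __len__(self):
        return len(self.tile_paths)

    def __getitem__(self, idx):
        tile = np.load(self.tile_paths[idx]).astype(np.float32)
        # Per-tile, per-band normalization performed on the fly.
        mean = tile.mean(axis=(1, 2), keepdims=True)
        std = tile.std(axis=(1, 2), keepdims=True) + 1e-6
        tile = (tile - mean) / std
        return torch.from_numpy(tile), torch.tensor(self.labels[idx])

# A deliberately small CNN standing in for the architectures tested under
# Objective 4; assumes 4-band (e.g. RGB + NIR) input tiles.
model = nn.Sequential(
    nn.Conv2d(4, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),
)

def train_one_epoch(dataset, batch_size=32, lr=1e-3):
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for tiles, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(tiles), labels)
        loss.backward()
        optimizer.step()
```

Because the dataset reads and normalizes each tile only when it is requested, the training set size is limited by disk rather than memory, which is the design goal described under Objective 3.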

Publications