Progress 08/01/18 to 07/31/19
Target Audience: We made consistent outreach efforts to a range of potential end-users throughout the reporting period. State agencies in California we have spoken to include CAL FIRE (forest and wildfire), the California Natural Resources Agency, and Caltrans (transportation). Each of these agencies has responsibility over land or right-of-way management, and access to regular tree mortality monitoring is of high interest to them. We also spoke with US Forest Service Region 5 scientists, forest supervisors, and leadership, and held outreach discussions with conservation organizations such as The Nature Conservancy and the Sierra Nevada Conservancy, which either directly manage land or provide funding to land managers to improve forest stewardship. Although it falls outside the reporting period, we have also been invited to present our California forest mortality mapping work at the annual meeting of the California Forest Pest Council in two weeks (November 13-14, 2019).

Changes/Problems: There were no major delays or disruptions in our research plan. However, we decided to extend our research timeline (see Accomplishments) to allow continued model experimentation to maximize the accuracy of our deep learning algorithms in distinguishing dead trees from living trees. Instead of the original project end date of March 31, 2019, we are targeting a project end date of January 31, 2020.

What opportunities for training and professional development has the project provided? By engaging with stakeholders in the academic, non-profit, and government agency communities and demonstrating our mortality mapping capacity, we received two significant opportunities. First, we were invited to join a consortium of companies, universities, and non-profits to submit an (ultimately successful) competitive proposal for a large California Energy Commission grant to develop a next-generation wildfire risk model that accounts for extreme tree mortality.
Second, the software capacity we gained during our Phase I project allowed us to demonstrate the application of deep learning to derive new ecological insights, such as mapping tree height. This ultimately led to our successful grant application to the Gordon and Betty Moore Foundation to build the California Forest Observatory, a real-time forest fuels and wildfire hazard mapping system.

How have the results been disseminated to communities of interest?
What do you plan to do during the next reporting period to accomplish the goals?
What was accomplished under these goals?
Western US forest ecosystems are experiencing severe stress from drought, heat, fires, and pest outbreaks. Forest dieback, driven by high rates of tree mortality, is widespread across the region and is expected to increase as these stresses shift in geographic range and intensity. Higher fire frequency and intensity, resulting from increasing rates of tree mortality, are expected to increase economic burdens on a wide range of institutions, including natural resource managers, firefighters, state and federal agencies, and homeowners. In this project, Salo Sciences built and tested a system for forest mortality monitoring at previously inaccessible scales by combining daily, high-resolution satellite imagery with deep learning algorithms and extensive field observations. Our comprehensive (or "wall-to-wall") maps of mortality at the individual tree scale can be used to improve the coordination of forest management activities by precisely identifying the areas facing the greatest fire hazard, optimizing timber salvage strategies, and forecasting future risks.

Objective 1. Development of a stratified geographic sampling scheme, designed to sample imagery from the varied ecosystems and land use types across California, to capture a wide range of image variability for training a deep learning model.

California has a diverse set of forest ecosystems with varying gradients of ecological, phenological, and morphological properties. For our AI algorithm to accurately map tree mortality anywhere in the state, it must be able to distinguish dead trees from the variations in these properties. To achieve this generalizability, the algorithm must be trained with satellite imagery drawn from across these gradients. We built sampling software that randomly samples image tiles in equal proportions across a range of environmental strata, as defined by the user.
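A minimal sketch of this equal-proportion tile sampling is shown below. The stratum names, tile identifiers, and function names are hypothetical illustrations, not our production software:

```python
import random

def sample_tiles(tiles_by_stratum, n_per_stratum, seed=0):
    """Draw an equal number of image tiles from each environmental stratum.

    tiles_by_stratum maps a stratum label (e.g. an ecoregion or land-use
    class) to the list of candidate tile IDs that fall inside it.
    """
    rng = random.Random(seed)
    sample = {}
    for stratum, tiles in tiles_by_stratum.items():
        # Sample without replacement, capped at the number of tiles available.
        k = min(n_per_stratum, len(tiles))
        sample[stratum] = rng.sample(tiles, k)
    return sample

# Example: three hypothetical strata with different numbers of candidate tiles.
strata = {
    "mixed_conifer": [f"mc_{i}" for i in range(100)],
    "oak_woodland": [f"ow_{i}" for i in range(40)],
    "chaparral": [f"ch_{i}" for i in range(10)],
}
picked = sample_tiles(strata, n_per_stratum=20)
```

Sampling in equal proportions per stratum, rather than uniformly over the state, prevents the most common land cover types from dominating the training set.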
The software can aggregate tiles from multiple input sources for cases where we use more than a single source of satellite imagery or other datasets. These image tiles are then used as inputs to our mortality mapping AI algorithm.

Objective 2. Collection of a robust set of field observations based on the stratified geographic sampling scheme to train a deep learning model.

Field observations are central to validating the algorithm outputs: field-verified tree mortality data will be used to assess the accuracy of the final tree mortality maps produced by the algorithm. To gather enough field data, we completed a large field data collection campaign in October 2018. The PI and Co-I, along with two field assistants, mapped the locations of individual dead tree stems using a GPS system linked to satellite imagery on handheld tablets. We identified each tree's species when possible and collected data on the approximate time since death. In total, we mapped more than 5,000 trees over 175 acres across multiple environmental gradients. While our original objective was to collect field data for algorithm training, these data are more appropriate for validating the algorithm outputs.

Objective 3. Standardization of image pre-processing steps, such as scaling RGB values, calculating indices (such as normalized ratios between bands), and performing noise removal.

Preliminary research found we did not need to develop some pre-processing tasks, like atmospheric correction, since the corrections implemented by the data provider, Planet, were of high quality. Additionally, testing has shown that our algorithms are robust to the inter-scene and inter-sensor variation in Planet data. This reduces the need for noise removal and whole-scene normalization, eliminating a huge pre-processing burden.
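The remaining pre-processing steps named in Objective 3 (scaling raw band values and computing normalized ratios between bands) can be sketched as follows. The function names and pixel values are illustrative assumptions, not our production code:

```python
def scale_band(band, max_value=255.0):
    """Rescale raw digital numbers to the 0-1 range."""
    return [v / max_value for v in band]

def normalized_ratio(band_a, band_b, eps=1e-9):
    """Normalized difference between two co-registered bands, e.g. an
    NDVI-style (NIR - Red) / (NIR + Red) ratio; eps avoids divide-by-zero."""
    return [(a - b) / (a + b + eps) for a, b in zip(band_a, band_b)]

# Toy two-pixel example with made-up reflectance values: a vegetated
# pixel (high NIR, low red) and a bare pixel (equal NIR and red).
nir = [0.8, 0.5]
red = [0.2, 0.5]
ndvi_like = normalized_ratio(nir, red)
```

Indices like these condense multi-band spectral contrast into a single layer, which can help a model separate dead (brown) canopies from live (green) ones.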
Instead, we focused on developing a standardized gridding system and a high-throughput image loading and data normalization pipeline to facilitate rapid image processing in a cloud computing environment. The gridding system reprojects and aligns data from multiple image sources into a standardized grid, outputting consistent, spatially indexed imagery and ensuring perfect overlap between datasets. The image grids are designed to feed image tiles seamlessly into the dataloader, which in turn feeds images into the algorithm given a list of tile paths. By reading images in real time, we reduce memory overhead, allowing much larger training datasets to be processed. The dataloader also performs per-tile image normalization on the fly, reducing the number of pre-processing steps and allowing flexibility in how and when we perform normalization.

Objective 4. Optimization of the deep learning algorithm, including selection of algorithm architecture(s), image input size, and learning rate strategy.

This objective forms the core of our work under Phase I. We identified the appropriate family of convolutional neural network (CNN) architectures to test and developed the software to test different iterations of model structure, inputs, and model hyperparameters. As part of this testing phase, we experimented with varying image sizes and resolutions, combinations of image sensor data, loss functions, and model depths. Our results showed that we could achieve accuracies of 70-85% in distinguishing tree mortality. However, the insights gained from this experimentation lead us to believe we can achieve much higher accuracy (approaching 95%). We received a no-cost extension of our Phase I project and are continuing these experiments to maximize the accuracy of our algorithm.
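The on-the-fly loading and per-tile normalization described under Objective 3 can be sketched as below. Here `read_fn` stands in for real raster I/O against our grid system, and all names are hypothetical:

```python
def normalize_tile(pixels):
    """Standardize one tile on the fly: zero mean, unit variance."""
    n = len(pixels)
    mean = sum(pixels) / n
    std = (sum((p - mean) ** 2 for p in pixels) / n) ** 0.5 or 1.0
    return [(p - mean) / std for p in pixels]

def tile_loader(tile_paths, read_fn, batch_size=4):
    """Yield batches of normalized tiles, reading each tile from disk
    only when needed so the full dataset never sits in memory."""
    batch = []
    for path in tile_paths:
        batch.append(normalize_tile(read_fn(path)))
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # emit any final partial batch
        yield batch

# Stand-in for real raster I/O: fabricate a 4-pixel "tile" per path.
fake_read = lambda path: [float(path + i) for i in range(4)]
batches = list(tile_loader(list(range(10)), fake_read, batch_size=4))
```

Because the loader is a generator, memory use is bounded by one batch rather than the whole training set, which is what allows the much larger training datasets noted above.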