Recipient Organization
Carnegie Mellon University
5000 Forbes Avenue
Pittsburgh, PA 15213-3815
Performing Department
(N/A)
Non Technical Summary
Project Summary: Rapid Assessment of Wildland Fire Position and Plume Dynamics using Coordinated Multi-UAS Sensing

PIs: Sebastian Scherer, Katia Sycara, Ioannis Gkioulekas (Robotics Institute, CMU)

Overview: Current application of robotics to situational awareness in wildland fires is limited to high-altitude observations from Unmanned Aerial Systems (UAS), which lack the resolution needed for usable situational awareness. We propose to develop UASs that safely navigate near the ground, through dense smoke and obstacles, to reconstruct a high-resolution predictive model of the fire plume and the 3D environment. Such an integrated capability would transform current firefighting operations by providing timely information on the safety of escape routes and the direction of the fire, decreasing uncertainty and increasing firefighter safety, while also enabling new insights for fire and smoke science.

Intellectual Merit: We propose a 3-year collaborative, integrated research project that brings together experts in computational imaging and optics, multi-robot collaboration, adaptive sampling, and safe robot navigation, and integrates feedback from Subject Matter Experts (SMEs) in wildland fire management. A large proportion of injuries and losses of firefighter lives occur because of poor situational awareness of the safety of escape routes and inaccurate prediction of the position and intensity of the fire. To address the limitations of current technology we propose a three-pronged approach that integrates: (1) Model reconstruction of fire plume dynamics via collaboration of multiple UAS, integrating for the first time nanometer- to centimeter-band tomography, where simultaneous sensor sequences must be captured from different positions, orientations, and times. No single wave band is sufficient to capture all the information needed to reconstruct and predict the fire plume, so we will use different sensing modalities and develop an approach to tune filters across wavelength bands and select the sub-bands that are most useful for common wildfires. (2) To reconstruct the fire plume and predict fire position simultaneously, we will use adaptive sampling. Going beyond current work, we will provide techniques based on distributed mixtures of Gaussian processes that are efficient, decentralized, consider energy constraints, and operate with no assumed knowledge of spatial correlations. (3) We will develop an approach that provides high-resolution sensing with low SWaP-C (size, weight, power, and cost) through obscurants (smoke), and that can incorporate perceptual uncertainty and motion uncertainty from onboard sensors, fusing all sources of uncertainty and degradation into a risk map used, for the first time, for risk-aware safe motion planning and control. We will evaluate our integrated research approach and the resulting system in simulation and in field experiments, including prescribed burns operated by wildland fire managers, where fires are deliberately started and monitored.

Broader Impacts: The technologies developed in the proposed work will enable UAS to operate in locations currently inaccessible due to visual obscurants and obstacles. The enhanced situational awareness will increase the productivity and safety of firefighters. The integration of optimized sensor technology, fire plume prediction, and resilient operation in smoke and other visual degradation will have significant impact on wildland fire management and other disasters. Additionally, the proposed multi-robot deployment algorithms for efficient gathering of observations can be applied to many related problems such as environmental monitoring and search & rescue. The data and software resulting from the project will be made publicly available under open source licenses. The PIs will also integrate the research results into their research and education activities via capstone projects and seminars, and will broaden the participation of underrepresented minorities through additional means such as summer internships.
Animal Health Component
20%
Research Effort Categories
Basic
80%
Applied
20%
Developmental
0%
Goals / Objectives
Our research will address the following objectives.

Objective 1: Integrated Multi-Wave-Band Tomography for Plume Modeling and Dynamics (Leads: Gkioulekas and Sycara). We aim to create robust methods to accurately reconstruct the smoke plume in real time. Reconstructing fluid flows, such as smoke plumes, is of great interest in computer vision, computer graphics, and many scientific fields, where a common methodology is visible-light tomography [9, 10, 11, 12, 13, 14, 15, 16]. Such techniques use cameras to acquire multi-view video sequences of the fluid and reconstruct its spatio-temporally varying optical density and refractive index. This tomographic reconstruction task remains challenging: it is a heterogeneous inverse scattering problem, in which hundreds of thousands of unknown variables are coupled non-linearly to the image measurements through the physics of multiple scattering and continuous refraction. Untangling these non-linear dependencies results in prohibitively high computational cost, or in poor-quality reconstructions when measurements are few [11]. Additionally, given the limited sensors available on the UAVs, it is important to identify optimal sensing positions for collecting information to update the model. This is a challenging adaptive sampling problem, in which one iterates between using current knowledge of the plume parameters to select the next best set of measurements and using the new measurements to update the plume parameter estimates. We will address these challenges by pursuing several research directions: (1) We will expand the sensing wavelength bands to include short-wave (SWIR), mid-wave (MWIR), and long-wave (LWIR) infrared bands, as well as millimeter-wave and centimeter-wave radar. (2) We will develop new rendering algorithms to model the plume in all sensing wavelength bands. (3) We will integrate multi-wavelength information to enable near-real-time tomographic plume reconstruction. We will exploit the fact that, even though tomographic reconstruction is difficult in visible bands, it is easier in other wavelength bands (e.g., in MWIR only single scattering needs to be considered [17]), albeit at the cost of reduced resolution. Fusing information from all modalities will let us accelerate reconstruction while maintaining high resolution. (4) We will combine physical models of turbulent fluid flow [18] and radiative transfer [19] to efficiently simulate and infer both the optics and the dynamics of fire plumes. (5) We will develop novel sampling techniques that use the tomographic reconstruction outputs and explicitly consider reconstruction uncertainty and the temporal evolution of the plume to determine the optimal use of available sensing resources.

Objective 2: Multi-UAS Coordination for Learning and Prediction of Fire Dynamics (Leads: Sycara and Scherer). A key challenge of environmental sensing and monitoring is that of sensing, modeling, and predicting complex environmental phenomena, which are typically characterized by spatially correlated measurements. To tackle this challenge, many works have focused on multi-robot active sensing. A typical approach is to model the phenomenon as a Gaussian process, which allows its spatial correlational structure to be formally characterized and its predictive uncertainty to be formally quantified and subsequently exploited for guiding the robots to explore highly uncertain areas.
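As a point of reference for the baseline just described, the following is a minimal, illustrative sketch of a single-GP, uncertainty-driven active-sensing loop. It is not the proposed decentralized mixture-of-Gaussian-processes method; the measure function, survey grid, and kernel parameters are hypothetical placeholders.

# Minimal sketch: uncertainty-driven active sensing with one Gaussian process.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def measure(points):
    # Placeholder for a real sensor reading of the phenomenon (e.g., plume density).
    return np.sin(0.1 * points[:, 0]) * np.cos(0.1 * points[:, 1])

# Candidate sampling locations on a coarse grid over the survey area.
xs, ys = np.meshgrid(np.linspace(0, 100, 25), np.linspace(0, 100, 25))
candidates = np.column_stack([xs.ravel(), ys.ravel()])

# Start from a few random samples, then repeatedly fly to the most uncertain point.
X = candidates[rng.choice(len(candidates), size=5, replace=False)]
y = measure(X)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=20.0), alpha=1e-3)

for step in range(20):
    gp.fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    next_pt = candidates[np.argmax(std)]   # explore where predictive variance is highest
    X = np.vstack([X, next_pt])
    y = np.append(y, measure(next_pt[None, :]))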
While much work has been done in this area, firefighting presents many challenges that require new approaches: (1) The utility of the information at a sample point may vary over time. For example, where it may have been useful to get information about the fire plume direction and intensity at a particular sample location, if the wind changes the direction of the plume, that location may no longer be as informative. (2) As the plume moves, the candidate sample points need to be recomputed in real time. (3) Unlike most prior work, which has used centralized estimation, the scale and constraints of our problem necessitate decentralized estimation of the best sampling candidates and sampling times. (4) The robots may carry heterogeneous sensing payloads, and more than one robot must be present for accurate plume imaging, making this a spatio-temporal UAS coalition formation problem. (5) Delays due to battery charging may have significant undesirable effects. (6) Most crucially, navigation to the best estimated places is extremely challenging due to smoke, other obscurants, and other obstacles, necessitating novel risk-aware navigation methods. We propose integrated adaptive spatio-temporal estimation of sampling locations with resilient path planning and navigation for effective sample acquisition. We will pursue the following research directions: (a) use of distributed mixtures of Gaussian processes for scalable estimation, (b) efficient methods for incorporating various constraints (time, energy), and (c) coalition-based abstractions to capture the diversity of payloads and sensor concentration for accurate estimation of the fire plume, together with the integration of these with safe and resilient navigation.

Objective 3: Resilient Navigation Through Smoke and Particulates (Leads: Scherer and Gkioulekas). Given locations that need to be monitored and where samples of optical densities must be taken for fire plume reconstruction, the UAVs need to plan safe paths and navigate to those locations in the presence of significant perception uncertainty. Situational awareness at this level is invaluable for alerting personnel to environmental degradation, such as fallen debris in intersections that would impede locomotion. This objective requires significant advances in computer vision and path planning. In particular, reliable, high-resolution sensing through obscurants (smoke) in a low-SWaP (size, weight, and power) form factor will require a paradigm shift in algorithms, as well as reliable estimates of the uncertainties in the resulting map. The approach will use a combination of long-wave infrared and visible-light cameras together with radar, and will leverage ideas from radiance fields to separate the plume from obstacles and combine this with explicit risk modeling to plan risk-aware paths. These paths will incorporate all sources of uncertainty and degradation into a multi-layer risk map. The risk map is then used to dynamically adapt from prior experiences to deliver safe, high-performing maneuvers at execution time. Based on these estimates, we will develop a resilient navigation approach that explicitly models risk (e.g., conditional value at risk, CVaR) and trades off mission objectives against flight safety, going beyond standard methods for flight among obstacles. Towards this vision, we will pursue novel research in the following areas: (a) detection of thin obstacles, such as wires, in obscurants, (b) multi-layer risk mapping, and (c) risk-aware safe motion planning and control.
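To make the risk measure concrete, the sketch below shows one plausible way to score candidate paths using an empirical CVaR of a safety cost traded off against a mission cost. The path names, cost distributions, and risk weight are hypothetical placeholders, not the project's actual planner.

# Minimal sketch of risk-aware path selection with CVaR.
import numpy as np

def cvar(samples, alpha=0.9):
    # Empirical CVaR: mean of the worst (1 - alpha) fraction of cost samples.
    samples = np.sort(np.asarray(samples))
    tail_start = int(np.ceil(alpha * len(samples)))
    return samples[tail_start:].mean()

rng = np.random.default_rng(1)
# Hypothetical candidate paths: each has a mission cost (e.g., flight time) and
# Monte Carlo samples of a safety cost reflecting perceptual/motion uncertainty.
paths = {
    "direct": {"mission_cost": 60.0, "safety_samples": rng.lognormal(1.0, 0.8, 1000)},
    "detour": {"mission_cost": 90.0, "safety_samples": rng.lognormal(0.2, 0.4, 1000)},
}

risk_weight = 5.0   # trade-off between mission objective and tail risk
scores = {
    name: p["mission_cost"] + risk_weight * cvar(p["safety_samples"], alpha=0.9)
    for name, p in paths.items()
}
best = min(scores, key=scores.get)
print(scores, "->", best)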
The three objectives are tightly integrated, and the PIs will pursue them simultaneously and collaboratively. Adaptive sampling is important for determining the appropriate locations to image in order to reconstruct an accurate and predictive fire plume model. Safe navigation is crucial for enabling the UASs to reach those locations and take the image samples. Accurate and predictive fire plume models are the sine qua non of monitoring safe escape routes, firefighter positions, and future fire movements. We will demonstrate our research using simulators such as AirSim and, most significantly, in field trials of deliberate, controlled burns.
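To illustrate the core reconstruction problem behind Objective 1, the following toy sketch inverts line integrals of a 2D density field, roughly the single-scattering regime noted above for MWIR. The Gaussian phantom, parallel-beam ray geometry, and least-squares solver are illustrative stand-ins for the actual multi-band, multiple-scattering formulation.

# Toy tomography sketch: recover a 2D density field from line-integral measurements.
import numpy as np

n = 24                                    # reconstruction grid is n x n voxels
xs = (np.arange(n) + 0.5) / n             # voxel centers in the unit square

# Simple Gaussian "plume" phantom used as ground-truth density.
X, Y = np.meshgrid(xs, xs)
density = np.exp(-((X - 0.5) ** 2 + (Y - 0.6) ** 2) / 0.02)

def ray_row(angle, offset):
    # Crude discretized line integral: per-voxel weights along one sensing ray.
    w = np.zeros((n, n))
    steps = np.linspace(-1.0, 1.0, 4 * n)
    # Parallel-beam geometry: shift the ray from the image center by `offset`
    # perpendicular to its direction, then march along the direction.
    cx = 0.5 + offset * np.cos(angle + np.pi / 2)
    cy = 0.5 + offset * np.sin(angle + np.pi / 2)
    for t in steps:
        x, y = cx + t * np.cos(angle), cy + t * np.sin(angle)
        i, j = int(np.floor(y * n)), int(np.floor(x * n))
        if 0 <= i < n and 0 <= j < n:
            w[i, j] += 1.0 / (4 * n)
    return w.ravel()

# Simulate measurements from many views, then solve for the density field.
rows, meas = [], []
for angle in np.linspace(0.0, np.pi, 24, endpoint=False):
    for offset in np.linspace(-0.45, 0.45, 30):
        r = ray_row(angle, offset)
        rows.append(r)
        meas.append(r @ density.ravel())
A, b = np.array(rows), np.array(meas)

recon, *_ = np.linalg.lstsq(A, b, rcond=None)
print("reconstruction RMSE:", np.sqrt(np.mean((recon - density.ravel()) ** 2)))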
Project Methods
To promote replication of the results of the experiments, we will prioritize replicability in our hardware, software, and system setups. The core of our system will be based on an open drone platform, enhanced for the proposed work with specific sensors (long- and mid-wave infrared cameras). To encourage replicability of the results, we plan to use a team of OpenResearch Drone platforms (a public open drone design from PI Scherer's lab; see Fig. 12 and the Facilities section for details) as well as commercially available drones, e.g., the Pelican, for experimentation.

Experiments and Metrics. We will validate the performance of each objective through rigorous individual evaluation, then integrate the components semi-annually to perform system-level testing. Finally, we will hold annual demos to invite feedback from stakeholders.

Obj. 1 Integrated Multi-Wave-Band Tomography for Plume Modeling and Dynamics. We will evaluate our methods for reconstructing fire plume dynamics using (a) simulations of the plumes, (b) small campfires, and (c) fires from larger controlled burns. In simulations, we will render a variety of situations using our physics-based Monte Carlo rendering engine [25]. We will compare our estimates to the simulated ground-truth plume density, refractive index, and velocity using standard metrics such as root mean square error (RMSE). In real experiments, voxel-wise densities and velocities are not possible to obtain because the smoke evolves quickly. We will instead evaluate the degradation in density/velocity estimates obtained with fewer sensors or samples. Further, we will predict the plume densities/velocities over time and render the plume for comparison to the observed plume using common perceptual image metrics. Lastly, we will integrate these evaluations with those from the other two objectives to evaluate the overall task (escape route planning, hazard region estimation).

Obj. 2 Multi-UAS Coordination for Learning and Prediction of Fire Dynamics. The evaluation of Objective 2 is two-fold. First, we must test how well our framework can learn the environmental phenomena of interest, capturing both latent temporal and latent spatial correlations. A common metric for measuring the quality of the learned model is the RMSE between the true distribution and the learned model; we have used this metric before to evaluate environment modelling frameworks [52, 53, 49]. RMSE provides a deviation measure between the predicted posterior estimate and the ground truth. Another common metric is negative log-likelihood, which provides a similar measure to RMSE but also incorporates predictive variance, which can be useful to capture during evaluation. The model should also balance exploring uncertain regions with exploiting high-value locations. This will be measured by a score or utility function that quantifies how well the framework pursues an objective while simultaneously learning about the environment; this will also allow us to ensure the framework is not simply sampling the environment exhaustively. Second, we will evaluate computational efficiency, time to completion, and failure rates. We will perform these evaluations on publicly available data sets, such as the Intel Berkeley dataset and an open source dataset that contains spatio-temporal data (monthly average temperature readings across the whole world since 1948). Additionally, we will use data from PI Gkioulekas' physics-based Monte Carlo rendering simulator. Most crucially, we will use data from field experiments with controlled burns.
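As a concrete reference for the two model-quality metrics above, the short sketch below computes RMSE and the average Gaussian negative log-likelihood from a model's predictive mean and standard deviation at held-out locations. The arrays are hypothetical placeholders for actual predictions and ground truth.

# Minimal sketch of RMSE and Gaussian negative log-likelihood.
import numpy as np

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def gaussian_nll(y_true, mean, std):
    # Average negative log-likelihood under a Gaussian predictive distribution.
    var = std ** 2
    return np.mean(0.5 * np.log(2 * np.pi * var) + (y_true - mean) ** 2 / (2 * var))

y_true = np.array([1.0, 2.0, 1.5])      # ground-truth field values (placeholder)
mean   = np.array([1.1, 1.8, 1.6])      # predictive means from the learned model
std    = np.array([0.2, 0.3, 0.25])     # predictive standard deviations

print(rmse(y_true, mean), gaussian_nll(y_true, mean, std))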
Obj. 3 Resilient Navigation Through Smoke and Particulates. The experimental tests will be conducted for both regular obstacles and our thin-obstacle detection model. Results will be compared with other state-of-the-art methods, based on prior datasets [81, 66, 82], simulated datasets, and the data to be collected in this project. To evaluate the performance of our methods, we will consider detection as well as estimation error (RMSE). The detection metrics we will consider are similar to typical object detection and semantic segmentation metrics: pixel-wise metrics such as Average Precision (AP) [83, 84] and ROC [85, 86], and segmentation metrics such as IoU and mIoU [87, 88]. We will also evaluate the image transform model and the image discrimination model by visually checking and monitoring the discrimination rate, as well as by comparing the reconstruction loss against real-world collected datasets. For wire and unstructured thin-obstacle detection, the experiments and metrics are similar to previous efforts, but with thermal images as an additional input. After evaluating the obstacle detection modules, we will evaluate the risk-aware mapping and planning approach by testing how well we are able to reconstruct the map and capture its uncertainty relative to the actual map uncertainty, quantitatively in simulated datasets and qualitatively in real-life experiments, by comparing our approach to state-of-the-art fusion methods. We will compare our planning approach to other state-of-the-art methods to determine whether modelling and including risk measures such as CVaR in the decision making increases the resilience of the system. Overall, we will consider the following metrics to assess the safety and performance of the approach: time to collision (s) and number of collisions (#); mission success rate (%) under different visibility levels; and time to completion (s).
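For reference, the sketch below shows the pixel-wise IoU computation (and its average over classes, mIoU) that would be used to score thin-obstacle segmentation. The toy masks stand in for predicted and labeled obstacle masks.

# Minimal sketch of pixel-wise IoU and mIoU for segmentation evaluation.
import numpy as np

def iou(pred, gt):
    # Intersection-over-union of two boolean masks.
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union > 0 else 1.0

pred = np.zeros((4, 6), dtype=bool); pred[1, 1:5] = True   # predicted wire pixels
gt   = np.zeros((4, 6), dtype=bool); gt[1, 2:6] = True     # labeled wire pixels
print(iou(pred, gt))                                        # 3 / 5 = 0.6

# mIoU: average the per-class IoU (here, obstacle class and background).
per_class = [iou(pred, gt), iou(~pred, ~gt)]
print(np.mean(per_class))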
Culminating Experiments. At the end of each year, we will demonstrate the cumulative progress of the system. By this stage, the technical components of the system will have already been validated and integrated throughout the year. The purpose of these culminating experiments is to solicit stakeholder feedback, so as to keep the technology in line with real-world needs, and to gather data for comparison and evaluation in realistic experiments.

1. In the first year, we will evaluate and demonstrate at prescribed fire sites in western Pennsylvania's Game Lands, in collaboration with the Pennsylvania Game Commission (John Wakefield). This first year will focus on low-intensity prescribed burns and feature one to two drones. The primary focus of this demonstration will be to prove the safety and resilience of the system in light smoke around scattered obstacles. We will showcase the initial data gathered, visualized, and predicted by the system.

2. In the second year, we will demonstrate during training exercises with wildland firefighters alongside one of our collaborators (Sean Hendrix or Josh Wilkins) from Oregon and California. While this will still be a controlled fire, it begins to integrate and evaluate the approach with end users. We will also scale up to denser obstacles. This demonstration will feature two to three drones. The goal will be to validate the technology with actual wildland firefighters.

3. In the third year, we will demonstrate deployment with our wildland firefighter collaborators at an actual, small wildfire site. This high-intensity fire will create significantly more smoke than in the previous demonstrations and will highlight our reconstruction of the fire plume. The system will feature three to four drones. The goal will be to showcase the culmination of our integrative research, demonstrating resilient autonomy, smoke dynamics modeling, and multi-robot coordination. We will present stakeholder feedback integrated from all three years, and invite further discussion for deploying the product in the real world.