Source: NWB SENSORS INC.
INTELLIGENT MAPPING OF THE FARM USING LOW-COST, GPS-ENABLED CAMERAS DURING EXISTING FARM ACTIVITIES
Sponsoring Institution
National Institute of Food and Agriculture
Project Status
COMPLETE
Funding Source
Reporting Frequency
Annual
Accession No.
1020351
Grant No.
2019-33610-30297
Cumulative Award Amt.
$650,000.00
Proposal No.
2019-02291
Multistate No.
(N/A)
Project Start Date
Sep 1, 2019
Project End Date
Dec 31, 2022
Grant Year
2019
Program Code
[8.12] - Small and Mid-Size Farms
Project Director
Nugent, P.
Recipient Organization
NWB SENSORS INC.
80555 GALLATIN RD
BOZEMAN, MT 59718
Performing Department
(N/A)
Non Technical Summary
This Phase II work is broken down into three main categories: hardware and integration (~30%), data collection (~25%), and algorithm development and data analysis (~45%). Much of this work will happen in parallel as existing systems are updated and used while next-stage systems are under development. NWB Sensors, Inc. implements an iterative scientific/engineering approach on all our projects. This approach follows the steps of: 1. model the problem or design the system; 2. validate the model or system by collecting real-world data (testing); 3. refine the model or system if the desired performance was not achieved. Throughout the project, engineering project management methods will be implemented. These methods include industry-standard software tools for collaborative project management, documentation, software version control, and backup of project documents.

Development on the Improved camera controller will start in fall 2019 and continue into early spring 2020. At this time the camera controller will be updated to implement improved movement detection and camera control. During development, these motion detection and camera control routines will be tested using automobiles in simulated field tests. Work during the early spring will validate these improvements in farm tests. Initial work on the Nvidia Jetson embedded machine vision platform will also start at this time, with the goal of a prototype field test during the on-farm experiments in summer 2020. Based on the results of these tests, the prototype system will be improved during fall and winter 2020 to be ready for field tests in summer 2021.

Development to Improve the data processing toolchain and to implement the End user data application will happen in parallel and will take place throughout the project starting in fall 2019. The first task of this effort will be to implement object tracking in the data processing toolchain. The illumination correction and improved anomaly detection will utilize the object tracking routines. As new imagery is added during the 2020 and 2021 summers, these routines will be improved as necessary. Methods to communicate in-field data will be developed and tested, with a prototype ready for summer 2020.

Critical to the success of this project is the creation of tools that allow the System to learn new environments. Development on the methods for assisted training using auto-sorted classes can start in fall 2019. This work can use object identification graphs trained on grain crops that are being adapted to pulse crops. Validation of these techniques will use comparisons with existing human-built graphs for pulse crop object identification. After motion tracking has been integrated into the toolchain, further development on automated new crop and object detection can take place.

Work to Validate data processing and training tool chains will take place throughout the project using both newly collected and existing data. Validation will require close work with our cooperating growers and landowners to ensure the mapping data are accurate and in formats usable in their operations.
Animal Health Component
50%
Research Effort Categories
Basic
20%
Applied
50%
Developmental
30%
Classification

Knowledge Area (KA)   Subject of Investigation (SOI)   Field of Science (FOS)   Percent
404                   7210                             2020                     50%
402                   5310                             1140                     50%
Goals / Objectives
This Phase II project aims to develop Groundskeeper™ into a product ready for initial market entry. This will require readying the machine vision and data sharing software platform to enable the small farmer to perform their own vision-based mapping. Questions that will be addressed to mature Groundskeeper™ into a minimum realizable product include: What hardware should be used? What is needed in the software processing toolchain? How does the system handle new crops or weeds that it has not learned? What data should be delivered to the farmers? Lastly, the system capabilities must be validated, and results reported to key agencies and collaborators.

Answering these questions requires us to improve the camera controller design, improve the object detection algorithm, embed the algorithm in the camera controller hardware, develop automated methods of learning new crop/weed species, field-test the algorithm on a variety of farms, and characterize and validate system accuracies. The camera controller needs to be enhanced so that it can run the object detection and classification software and produce the resulting maps. Phase II will bring the system to a minimum realizable product ready to provide automated mapping and assist in developing and quantifying management strategies. This will require a minimum implementation of the data sharing platform to transmit the resulting maps from the in-field device to the grower, and to share imagery that cannot be classified properly with NWB, who will be acting as the grower community during early development. These goals and associated objectives are broken down into the categories listed below.

Goal: Improve the camera controller. This effort will develop a standardized embedded camera controller capable of processing imagery in real time on the harvester or tractor. The device will transmit data to the end user and log data locally. Objects not identified with high confidence, along with quality control images, will be transmitted for further processing. Objectives:
- Implement improved movement detection and camera control
- Move to a fully capable embedded system
- Add a cell modem and demonstrate transmission of in-field data

Goal: Improve the data processing toolchain. This effort will develop the robust machine vision algorithm into a successful product, with specific effort focused on the reduction of false positives. Even a small false positive rate in the clean crop can overwhelm the detection maps. Objectives:
- Implement improved illumination and color corrections
- Implement improved anomaly detection
- Implement object tracking and automated region-of-interest detection

Goal: End user data application. This portion of the effort will develop an application to allow viewing of processed imagery and detection maps, and to perform additional processing, on a desktop computer. Objectives:
- Implement generation of full-field visual maps
- Implement initial platform for data sharing
- Identify a third-party integrator for output maps for early deployment

Goal: Adapt system for learning new environments. To make Groundskeeper™ a viable product, the software needs to be capable of handling new crops and weeds that it has not learned. This effort will enable the large-scale adoption necessary for a successful product through implementation of a largely automated learning mode on the system. In this mode, the data will be aggregated in such a way as to determine the image classes present in the area. Class labeling and validation will require human input, but at a greatly reduced level. Objectives:
- Implement automatic detection of new crops and objects
- Implement assisted training of the classifier

Goal: Validate data processing and training chains. The image database collected in Phase I will be grown in Phase II to include more crops and more activities; this larger dataset will be used to accurately test the system's learning characteristics. New data collection efforts will include distinct sets of data with good spatial positioning data spanning new crops and new weeds. Objectives:
- Conduct on-farm data collection efforts with growers
- Validate using newly collected data from a new farm
Project Methods
This Phase II work is broken down into three main categories: hardware and integration (~30%), data collection (~25%), and algorithm development and data analysis (~45%). Much of this work will happen in parallel as existing systems are updated and used while next-stage systems are under development. NWB Sensors, Inc. implements an iterative scientific/engineering approach on all our projects. This approach follows the steps of: 1. model the problem or design the system; 2. validate the model or system by collecting real-world data (testing); 3. refine the model or system if the desired performance was not achieved. Throughout the project, engineering project management methods will be implemented. These methods will use industry-standard software tools for collaborative project management, documentation, software version control, and backup of project-critical documents.

Development on the Improved camera controller will start in fall 2019 and continue into early spring 2020. At this time the camera controller will be updated to implement improved movement detection and camera control. During development, these updates will be tested using automobiles in simulated field tests. Work during the early spring will validate these improvements in on-farm tests. Initial work on the Nvidia Jetson embedded machine vision platform will also start at this time, with the goal of a prototype field test during the on-farm experiments in summer 2020. Based on the results of these tests, the prototype system will be improved during fall and winter 2020 to be ready for field tests in summer 2021.

Development to Improve the data processing toolchain and to implement the End user data application will happen in parallel and will take place throughout the project starting in fall 2019. The first task of this effort will be to implement object tracking in the data processing toolchain. The illumination correction and improved anomaly detection will utilize the object tracking routines. As new imagery is added during the 2020 and 2021 summers, these routines will be improved as necessary. Methods to communicate in-field data will be developed and tested, with a prototype ready for summer 2020.

Critical to the success of this project is the creation of tools that allow the System to learn new environments. Development on the methods for assisted training using auto-sorted classes can start in fall 2019. This work can use object identification graphs trained on grain crops that are being adapted to pulse crops. Validation of these techniques will use comparisons with existing human-built graphs for pulse crop object identification. After motion tracking has been integrated into the toolchain, further development on automated new crop and object detection can take place.

Work to Validate data processing and training tool chains will take place throughout the project using both newly collected and existing data. Validation will require close work with our cooperating growers and landowners to ensure the mapping data are accurate and in formats usable in their operations.

Progress 09/01/19 to 12/31/22

Outputs
Target Audience: Our target audience is the small farmer. Due to our ties to small farms in Montana, we primarily work directly with farms in Montana, including one organic producer for summer 2020. These data collection activities have us working directly with our target audience.

Changes/Problems:

Discontinued Hardware. This project started with the aim of using action cameras such as the Garmin VIRB or GoPro Hero. We selected the VIRB due to its open application programming interface (API) definition, which allowed third-party software to interface with the camera. In late 2020, Garmin discontinued the VIRB camera. The GoPro Hero was considered, but at the time it required a private API, and our application was not approved. A public API for the GoPro cameras was released by 2022, but alternatives had already been selected for the project. To address the camera issue, we switched to a pair of USB3 cameras directly connected to the NVIDIA Jetson. These cameras were first used in the 2021 harvest, where the initial image quality was low due to the cameras' high-dynamic-range exposure algorithms. This was corrected by a change in the control software during the harvest, but it prevented quality data collection for half of the 2021 harvest. The hardware and camera systems were robust by the 2022 season; however, in direct sunlight we still faced issues with overexposure.

Extreme Drought / Climate. The Western United States was in extreme drought during the summers of 2021 and 2022. The drought impacted Montana producers in ways that created problems for this project. The harvest was spread out much longer than normal, with some farms still harvesting into October, requiring the project to extend into these periods. This resulted in atypical crop stands and weed growth stages. The thin crop stands caused issues with the semantic segmentation system used to detect the cropped region in the imagery. This led to semantic segmentation being dropped halfway through the 2021 harvest; instead, the whole image was processed. Despite these changes, the atypical crop stands still proved challenging for our detection system. An example is a thin stand of lentils being classified as fallow. This was reported by a producer who called us to joke that our camera was insulting him. When asked what his yield monitor was reporting, he wouldn't say, but stated he'd need only one digit. These years provided important edge-case data for the classification system but did little to prove accuracy in a typical year.

Limitation of Rural Internet Infrastructure. The other challenge we faced throughout the project was the limited internet speeds in rural areas. Our software produces approximately 1 TB per year for a small farm. Storing these data locally on the farm is impractical for the end user, but uploading to a cloud platform is similarly impractical: typical rural internet upload speeds range from 3 Mbps to 10 Mbps. At 3 Mbps, 1 TB takes roughly 33 days to upload, and at 10 Mbps roughly ten days; neither upload time is practical. In addition to low speeds, the sparse cellular connectivity in rural areas limited the real-time data connection for our systems. The data streamed from instruments was limited to detection summaries, with full images stored on the instrument and transmitted only when required to debug misperforming instruments. These limitations of rural connectivity place a severe (but temporary) limitation on the feasibility of the systems we developed in this program.
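For reference, the upload-time figures above follow from simple arithmetic. The sketch below assumes decimal units (1 TB = 8 x 10^12 bits) and zero protocol overhead, which is why it lands slightly under the 33-day figure quoted in the text; real transfers carry overhead that pushes the raw number upward.

```python
# Quick sanity check of the rural-upload arithmetic quoted above.
# Assumes 1 TB = 8e12 bits and no protocol overhead.

def upload_days(terabytes: float, mbps: float) -> float:
    """Days needed to move `terabytes` of data at `mbps` megabits/second."""
    bits = terabytes * 8e12
    seconds = bits / (mbps * 1e6)
    return seconds / 86_400

for speed in (3, 10):
    print(f"1 TB at {speed} Mbps: {upload_days(1, speed):.1f} days")
# 1 TB at 3 Mbps: 30.9 days (overhead pushes this toward the ~33 days cited)
# 1 TB at 10 Mbps: 9.3 days
```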
What opportunities for training and professional development has the project provided? This project has been a valuable opportunity for professional development and training for NWB Sensors and three interns from Montana State University. The interns gained practical skills in electrical design, software development, project management, and weed identification through this project. They also received field training to enhance their knowledge of crop identification and farming practices. Furthermore, they learned to use the Python programming language, machine vision, data classification, and XML data structures. The interns reported that this experience improved their time management skills and confidence in working with the public, as they interacted and coordinated with producers during harvest data collection. The interns have successfully graduated and pursued their career goals, partly thanks to the skills they acquired through this project. One secured a competitive position with a local data analysis company, another chose a career path in natural resource management, and the third joined NWB Sensors as a software engineer but has since moved to another employer. NWB Sensors also benefited from this project, as it challenged us to improve our coding practices and project management. We established documented coding standards. We also adapted our project management system to include hardware tracking, which enabled us to monitor our crop imaging systems' installations and track any issues encountered during operation. The software standards and the hardware tracking system have been applied to all our ongoing projects.

How have the results been disseminated to communities of interest? NWB Sensors worked with grain producers and Montana agriculture researchers throughout this project. The main source of interaction with producers has been installing our camera systems on producers' equipment in the fields where we worked. These installs led to direct interactions with these producers and their support staff. We also presented our results to various Montana agriculture research community stakeholders and demonstrated our system to students at Montana State University. These interactions have helped us better understand their needs and how our systems can improve their farming practices. These activities also produced the imagery used to train our machine vision platform. In 2020, we obtained high-quality data and field coverage and disseminated the maps to the producers after the harvest. The maps showed useful insights for management and decision-making. In 2021, we faced challenges with data quality and could not disseminate the maps to the producers. Instead, we updated the maps of previous seasons with the latest detection models and confirmed that they matched the producers' expectations. In the final harvest of this project in 2022, we deployed four prototype camera systems. These systems achieved real-time data processing and availability for the cooperating producers, enabling the producers to observe and compare the camera operation with their own observations. We aimed to integrate weed and crop stand assessment maps with producers' yield maps; however, potentially due to low yields during the drought, producers have been reluctant to provide their yield data. In 2022, we also identified multiple potential licensees for our system and provided summaries of the system and its capabilities.
We have been targeting partners with a strong presence in the market and a compatible product line, and we continue working with these potential partners, showcasing our system with the goal of licensing this product. NWB Sensors maintains a website that explains this project's main goals and methods, and we are updating it to highlight our achievements from Phase II of the project.

What do you plan to do during the next reporting period to accomplish the goals? Nothing Reported

Impacts
What was accomplished under these goals? This project has developed a robust machine-vision camera to bring precision agriculture to farms of any size. This tool combines a camera, embedded computer, and GPS, all mounted inside the vehicle's cab. Images are processed using computer vision software that identifies and classifies objects of interest, such as weeds and cropping problems, resulting in maps of detected objects. The resulting map of cropping problems can be used to identify fields or field zones that may require further investigative action. These actions could be soil testing to determine why the crop stand is always poor in an area, site-specific weed management, crop rotations to enable new herbicide chemistries, or other cropping system changes.

Goal 1: Improve the camera controller. The final camera controller is an Internet of Things (IoT) enabled edge computing device based on the Nvidia Jetson Xavier NX, an embedded Linux computer with an integrated GPU for machine vision. Two USB3 cameras connected to the Jetson collect imagery, each covering half of the field of view. Imagery is classified in the field and stored using lossless video compression, with capacity for up to 300 hours of data. Position, time, heading, and speed are provided by a u-blox LEA-6S L1/L2 GPS. This device has proven capable of operating in the dirty, unstable environment of the cab of combine harvesters and other agricultural vehicles, providing real-time classifications on the Nvidia Jetson with real-time communication to our web-based platform.

Goal 2: Improving the data processing toolchain. The main thrust of machine learning on this project is training a convolutional neural network (CNN) classifier that can distinguish between crops, human-made objects, and weeds. This task requires the CNN to have good generalization. In Phase I, Inception-V3 [1] showed good results, but in Phase II we transitioned to Xception [2] to provide better generalization. Furthermore, a pruning technique, where a subset of the connections is turned off during each training step, is used to avoid overfitting. After pruning, the weights are refined, leading to an overall accuracy of ~98% and good generalization. Each image collected by the USB cameras is broken into 299x299 pixel tiles, and each tile is classified. This method has made accurate training very critical. A multi-label Xception-based CNN is used to address the possibility of more than one object of interest within a tile. The final layers of this CNN have been highly modified to support multi-label classification, assigning labels for all the objects in the image. These labels include specific object labels, such as wild oats or wheat, and non-specific labels, such as crop or weed. This multi-label network has demonstrated superior performance in complex scenes and can provide real-time classifications on the Nvidia Jetson. During the latter half of this project, we carefully reworked the databases used to train our multi-label CNN. This initially decreased the number of images in our training set as we ensured low overlap between training images and a greater diversity of originating (parent) images. The database now consists of 217,018 labeled tiles from 48,478 images. The tiles are 299x299 pixels and contain 942,287 objects from 224 classes, such as crops, weeds, and machinery. The database also tracks each tile's source image, location, camera, and time. Of the 224 classes, only 50 are used for model training. Some classes are dropped, and some are combined due to low separability; for example, kochia, Russian thistle, and pigweed tumbleweed are combined into a single tumbleweed class. The database is divided into training (92%) and testing (8%) sets, with no overlap of source images. A sketch of the tiling and multi-label arrangement appears below.
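As an illustration of the tiling and multi-label head described above, the following is a minimal sketch of how such a network could be assembled in TensorFlow/Keras. The 299x299 tile size and the 50 training classes come from the text; the pooling layer, dropout rate, optimizer, and the `tiles` helper are illustrative assumptions, not NWB's actual architecture.

```python
# Sketch (not NWB's actual model): a multi-label head on an Xception
# backbone, fed 299x299 tiles cut from each camera frame.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

TILE = 299          # tile edge, matching Xception's native input size
NUM_CLASSES = 50    # classes used for training, per the text

def tiles(frame: np.ndarray) -> np.ndarray:
    """Cut an HxWx3 frame into non-overlapping 299x299 tiles."""
    h, w = frame.shape[:2]
    return np.array([frame[r:r + TILE, c:c + TILE]
                     for r in range(0, h - TILE + 1, TILE)
                     for c in range(0, w - TILE + 1, TILE)])

base = tf.keras.applications.Xception(
    include_top=False, weights="imagenet", input_shape=(TILE, TILE, 3))
x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dropout(0.3)(x)  # illustrative regularization
# Sigmoid outputs (rather than softmax) let several labels fire at once,
# e.g. "wheat", "thin", and "weed" in the same tile.
out = layers.Dense(NUM_CLASSES, activation="sigmoid")(x)

model = Model(base.input, out)
model.compile(optimizer="adam",
              loss="binary_crossentropy",  # independent per-label loss
              metrics=[tf.keras.metrics.AUC(multi_label=True)])
```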
Goal 3: End-user data application. This portion of the effort developed an application to allow viewing of processed imagery and detection maps on a producer's device. In 2020, NWB Sensors, Inc. acquired Weblink Sensors, LLC, and integrated their IoT platform and online database into this project. This platform allowed real-time transmission of in-field detections to a centralized server. Real-time maps of detected objects, vehicle positions, and other metadata are generated using the WebLink Sensors platform. Limitations of rural internet connectivity restricted transmissions to object detections and metadata, not images. Images were saved onboard the vehicle and collected by NWB employees through in-person visits during harvest or afterward. Once images are collected, a customized QGIS build connects the detection database with the raw imagery. Detections are displayed as a map, and a user can click anywhere on the map to view the closest image or video frame. The imagery, labels, and detections are organized through SQLite databases. These databases track the storage location, geolocation, assigned labels, object detections, date, crop, farming activity, and camera of the data collection.

Goal 4: Adapt the System for learning new environments. Neural networks are unable to generalize beyond their training data. This shortfall is commonly addressed by increasing the breadth of the training data to encompass either more classes or a greater variety of the present classes. This modification fails to solve the underlying problem that the networks are a closed set; every image must be classified into one of the available classes, as is the nature of softmax classification. This was addressed by moving to a multi-label classifier. Here, the multi-label network provides a low confidence score for all classes when encountering unknown objects. Multi-label classification also allowed for training generic object classes such as "weed" or "crop," which lets objects be identified categorically when they are not contained within the training data. Unknown images are collected until a significant number exist (~1000 or more), then labeling occurs through unsupervised classification. This process uses a modified Xception network for feature extraction, dimensional reduction through UMAP [4] or t-SNE [5], and clustering with hdbscan [6]. The resulting clusters are reviewed by humans, cleaned of incorrect data, and given a classification, or discarded if the data are not needed. While the unsupervised clustering could occur on the Nvidia Jetson, labeling these clusters requires human interaction that is not practical in the field. Therefore, the process of learning new classes or adapting to new crops occurs once the data have been transferred to our servers. A sketch of this clustering pipeline appears below.
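To make the Goal 4 pipeline concrete, here is a minimal sketch of the feature-extraction, reduction, and clustering steps. It assumes the umap-learn and hdbscan Python packages, and it uses a stock ImageNet-weighted Xception in place of NWB's modified network; the function names, cluster-size setting, and output dimensionality are illustrative.

```python
# Sketch of the unknown-image sorting pipeline described under Goal 4:
# CNN feature extraction, dimensionality reduction, then density-based
# clustering. Names and parameters are illustrative.
import numpy as np
import tensorflow as tf
import umap      # pip install umap-learn
import hdbscan   # pip install hdbscan

extractor = tf.keras.applications.Xception(
    include_top=False, weights="imagenet", pooling="avg")  # 2048-d vectors

def embed(batch: np.ndarray) -> np.ndarray:
    """batch: (N, 299, 299, 3) uint8 tiles -> (N, 2048) feature vectors."""
    x = tf.keras.applications.xception.preprocess_input(batch.astype("float32"))
    return extractor.predict(x, verbose=0)

def propose_classes(unknown_tiles: np.ndarray) -> np.ndarray:
    """Cluster unknown tiles into candidate new classes (-1 = noise)."""
    feats = embed(unknown_tiles)                        # bottleneck vectors
    low = umap.UMAP(n_components=10).fit_transform(feats)
    return hdbscan.HDBSCAN(min_cluster_size=25).fit_predict(low)

# The resulting clusters are then reviewed, cleaned, and named by a
# human before being folded back into the training database.
```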
Goal 5: Validate data processing and training chains. This effort aimed to increase the size of the image dataset to test the System's ability to learn new characteristics spanning new crops and weeds. We worked with additional growers to map their fields during the 2020, 2021, and 2022 growing seasons to achieve this goal. Through these deployments, we have worked with seven producers covering a range of crops, issues, cropping practices, and climates. We worked with these collaborating producers to understand and adapt to the management practices they use in their fields and other specifics of their farms. This effort has increased the image dataset to over 5.2 million frames (both still images and video frames) and tested our ability to adapt to new crops and cropping systems. This has required learning new characteristics spanning new crops, new weeds, and new cropping problems. Fully new crops obtained through this effort include flax and sugar beets. New cropping problems have included very thin stands throughout the crops due to drought, varied emergence causing immature crops at harvest, and sawflies in wheat. These new crops and cropping issues have provided us with the data required to fine-tune and test our data processing, network training, and management software.

References
[1] X. Xia, C. Xu, and B. Nan, "Inception-v3 for flower classification," in 2017 2nd International Conference on Image, Vision and Computing (ICIVC 2017), 2017, pp. 783-787.
[2] F. Chollet, "Xception: Deep learning with depthwise separable convolutions," in Proc. 30th IEEE Conf. on Computer Vision and Pattern Recognition (CVPR 2017), 2017, pp. 1800-1807.
[4] L. McInnes, J. Healy, and J. Melville, "UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction," arXiv:1802.03426, 2018.
[5] L. van der Maaten and G. Hinton, "Visualizing data using t-SNE," Journal of Machine Learning Research, vol. 9, pp. 2579-2605, 2008.
[6] L. McInnes, J. Healy, and S. Astels, "hdbscan: Hierarchical density based clustering," Journal of Open Source Software, vol. 2, no. 11, p. 205, 2017.

Publications


Progress 09/01/21 to 08/31/22

Outputs
Target Audience: During the summer of 2022, NWB Sensors continued to deploy our systems on farms as part of our ongoing efforts to engage with the agricultural community and prove the reliability of these systems. Four prototype camera systems that provide real-time processing and IoT connectivity were deployed on four farms. We are working with these farms to incorporate weed and crop stand assessment maps with their yield maps. Additionally, we have continued presenting results of our on-farm deployments to researchers at the Montana State University College of Agriculture, the Montana Agricultural Experiment Stations, and other members of the agriculture research community in Montana. We also had an opportunity to demonstrate our system to agriculture and education students. In the previous year, NWB Sensors had initiated an effort to reach out to potential licensees for the system we are developing; in 2022 this effort came to fruition. We identified a potential partner who is well-established in the market and for whom the Groundskeeper system would complement their existing product line. We have continued efforts to reach other potential partners and continue to have discussions as we demonstrate our system during the 2022 harvest.

Changes/Problems: The extreme drought in the Western United States continued through the summer of 2022. This continued to produce atypical crop stands and challenges for accurate detection. An example of this is an extremely thin stand of lentils being classified as fallow by the camera system. These years provide very important edge-case data for the classification systems. The drought also led to highly variable harvest dates for many crops, due to variable emergence of winter wheat or even replanting of winter wheat to spring wheat because of low stand density. This resulted in an extended harvest for 2022; due to this, we requested and were granted an extension to the project, moving the project closing date to December 31, 2022.

What opportunities for training and professional development has the project provided? Over the first half of the past year, this project provided on-the-job training for one intern from Montana State University in electrical design, software development, and neural network training. In spring 2022, this student graduated and landed a very competitive job position with a local data analysis company, enabled in part by the skills he acquired through on-the-job training on this project.

How have the results been disseminated to communities of interest? NWB Sensors continued to work with four farms through the deployment of our prototype camera systems. The data from the 2021 harvest was of insufficient quality to be provided to these collaborators. During the 2022 harvest, real-time processed data was available for viewing by the operator where desired. This allowed the operators to observe the camera operation and compare it with their observations of the fields. We presented our results to researchers, demonstrated our system to students, and continued to reach out to potential partners during 2022.

What do you plan to do during the next reporting period to accomplish the goals? During this next and final reporting period, we will focus on developing maps based on the data collected during the 2022 harvest.
Demonstration of the real-time mapping of the system and the ability to overlay detected problem patches of weeds or crop stand with farmer-produced yield maps will prove the ability of this system to provide needed insights into production problems.

Impacts
What was accomplished under these goals? Goal 1: Improve the camera controller. During the 2021 deployment, the cameras we had selected were producing low-quality imagery because the cameras' exposure settings and internal processing led to blurring and ghosting in the images. Late in the 2021 harvest, an optimal setting was determined for these cameras; these settings are in use for the 2022 harvest.

Goal 2: Improving the data processing toolchain. Over the past year, we have continued to develop the neural network model that forms the core of our object detection system. Our focus has been on advancing the network's ability to assign multiple labels per scene, such as labeling a weed in a thin stand of wheat with "wheat," "thin," "crop," "weed," and the specific weed type. This multi-label network has demonstrated superior performance in complex scenes compared to our single-label network. We have recently developed a new multi-label network capable of operating in wheat, barley, oats, fallow, canola, mustard, field peas, chickpeas, and lentils. Experimental operation is also possible in fava beans, corn, and along roadways. This latest version of the multi-label network has improved performance in thin crops, which have been particularly challenging during the droughts of 2021 and 2022. Although this version was not deployed during the 2022 harvest, we plan to use it to reprocess the data after harvest. Over the past year, we also carefully reworked the databases used to train our neural network models. This effort expanded our main database to 307,876 labeled objects, a net increase of 116,964 objects; the total change is larger, as we also removed objects from the previous database that were too similar to existing examples within their respective classes. We also completely reworked the validation database so that its labeled objects no longer include images originating from the same parent image as the training database. This ensures the two datasets are truly independent (a sketch of such a parent-aware split appears at the end of this section).

Goal 4: Adapt system for learning new environments. During 2022, we tested and began routinely using the software we developed to adapt our cameras to new, unknown imagery. This software is based on unsupervised classification techniques and will continue to see further testing in 2022.

Goal 5: Validate data processing and training chains. Further testing and validation of these techniques is continuing during the 2022 harvest.
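The parent-image-aware split described under Goal 2 can be sketched with scikit-learn's grouped splitting utilities. The function below is illustrative: the `parent_ids` key is a hypothetical stand-in for the database's source-image field, and the validation fraction is an assumption rather than a figure from this report.

```python
# Sketch of a parent-image-aware train/validation split: tiles cut from
# the same parent image must never straddle the two sets, so splitting
# is grouped on a parent-image key. scikit-learn assumed.
from sklearn.model_selection import GroupShuffleSplit

def split_by_parent(tile_ids, labels, parent_ids, val_frac=0.08, seed=0):
    """Return (train_idx, val_idx) with no parent image in both sets."""
    splitter = GroupShuffleSplit(n_splits=1, test_size=val_frac,
                                 random_state=seed)
    train_idx, val_idx = next(splitter.split(tile_ids, labels,
                                             groups=parent_ids))
    return train_idx, val_idx
```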

Publications


Progress 09/01/20 to 08/31/21

Outputs
Target Audience: During the summer 2021 growing season, on-farm deployments of our systems continued. This effort kept us engaged with the agricultural community. During the 2021 summer, we also had the opportunity to present some of the results of our on-farm deployments to researchers at the Montana State University College of Agriculture, the Montana Agricultural Experiment Stations, and other members of the agriculture research community in Montana. During the previous year, NWB Sensors started an initiative to reach out to potential licensees of the system we are developing. The intent of this effort is to identify a partner who is already established in the market and for whom the Groundskeeper system complements their existing product line. We have established connections with potential partners and continue discussions as we prepare for a full system demonstration in summer 2022.

Changes/Problems: As stated in the previous report, Garmin discontinued the VIRB camera, which had been the core "action camera" component of our system. Due to this change, we decided to connect a USB3 camera directly to the Jetson. 2021 was the first year using this USB3 camera system. The initial quality from these cameras was low due to the cameras' high-dynamic-range exposure algorithms. This was corrected via a change in the camera control software during the harvest, but it prevented quality data from being collected for approximately half of the harvest season. The drought in 2021 that impacted Montana producers also created problems for this project. The harvest was spread out much longer than normal, with some farms still harvesting into October, requiring the project to extend into these periods. The drought also negatively impacted crop stands. The resulting thin stands caused issues with the system not accurately detecting the cropped region in the imagery. This was addressed mid-season through software changes, but it negatively impacted the quality of the processed data. The hardware and camera systems are robust going into 2022, but there are some concerns about the 2022 growing season facing a drought similar to 2021.

What opportunities for training and professional development has the project provided? During the past year, this project provided on-the-job training for one intern from Montana State University. This training included electrical design, software development, and neural network training. In addition, continued in-the-field training has taken place to assist employees with crop identification and to help those unfamiliar with agriculture better understand farming practices.

How have the results been disseminated to communities of interest? The data collected during the summer 2021 harvest was of insufficient quality to disseminate to producers. Updated maps of data collected in prior seasons were disseminated to end users. These maps used the latest detection models and showed trends that were consistent with producer expectations.

What do you plan to do during the next reporting period to accomplish the goals? The main effort that remains is two-fold. The first is real-time object detection and map generation. The second is the toolset for collecting unknown objects, sorting these objects, and generating new object classes. Both of these efforts will be addressed during the summer 2022 growing season.
Object detection and map generation: The tools, hardware, and procedures are in place to enable real-time mapping of objects detected in the field. In preparation for this demonstration, a finalized neural network is being implemented. This will be ready and fully tested for deployment in the 2022 harvest. The main risk to this effort is the potential for continued drought leading to delayed spring planting and, as a result, a delayed harvest.

Adapting to unknown objects: The individual tools for this process are in place. The main effort is to refine these into a true toolchain and demonstrate that they can take a partially trained network and adapt it to operate in a new crop. These partially trained networks are able to identify many of the objects in the imagery but are not able to reliably classify the objects within the crop. This will be demonstrated using the fava bean, sugar beet, and flax data that have been collected. These are three relatively different crops that will show the potential of our system to identify when it is operating in an unknown crop and, with minimal human interaction, produce a new version that is adapted to the new crop. A sketch of this adaptation step follows.
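As a rough illustration of adapting a partially trained network to a new crop, the sketch below keeps the learned backbone and swaps in a fresh classification head for the newly labeled classes. TensorFlow/Keras is assumed, and the assumption that the old head consists of the final two layers (dropout plus dense) is illustrative, not a description of NWB's actual model surgery.

```python
# Sketch: adapt a trained network to a new crop with minimal relabeling
# by freezing the backbone and training only a new head.
import tensorflow as tf
from tensorflow.keras import layers, Model

def adapt_to_new_crop(trained: Model, n_new_classes: int) -> Model:
    features = trained.layers[-3].output   # output just below the old head
    backbone = Model(trained.input, features)
    backbone.trainable = False             # reuse learned features as-is
    x = layers.Dropout(0.3)(backbone.output)
    out = layers.Dense(n_new_classes, activation="sigmoid",
                       name="new_crop_head")(x)
    adapted = Model(backbone.input, out)
    adapted.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                    loss="binary_crossentropy")
    return adapted
```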

Impacts
What was accomplished under these goals? This project has produced an easy-to-use machine-vision camera prototype to provide end-of-season quality control for crop production. The tool is a combination of a camera, embedded computer, and GPS, all mounted inside the combine's cab, that collects images during the harvest. The images are processed to detect crop stand issues (thin, lodged, bare ground, etc.), weeds, and other objects of interest. The result is a map of these detected problems that can identify the field's problem regions, which may require further action. These actions could be soil testing to determine why the crop stand is always poor in an area, site-specific weed management, crop rotations to enable new herbicide chemistries, or other cropping system changes.

Goal 1: Improve the camera controller. During the 2021 harvest, the first fully operational version of these systems was deployed. In these devices, data were collected and classified in the field and archived onto onboard storage. The detected objects of interest were then transmitted to a cloud-based server via a cellular connection. Real-time mapping of detected objects did not occur during 2021; however, state of health and harvester position were mapped. This allowed for testing of our data telemetry system: data are transmitted when a cellular connection is available and, if not, are logged and transmitted when service becomes available (a sketch of this store-and-forward behavior appears at the end of this section). This operated as planned during summer 2021. Supply chain issues due to the COVID-19 pandemic limited the optical lenses available for the cameras deployed in 2021. Due to this problem, sub-optimal lenses were deployed. These lenses led to data quality problems caused by vignetting and decreased image quality near the edge of the field of view. Problems were also encountered due to the internal processing in the high-dynamic-range settings of the deployed cameras. These algorithms led to artifacts in the imagery and decreased texture on the plants. Both of these issues have been addressed in the version to be deployed during the summer 2022 harvest. The optimal lenses became available and have been integrated into the systems. The cameras' control software has also been updated to use a different exposure setting, and image quality is where it should be. This version has undergone extensive testing and is ready for deployment.

Goal 2: Improving the data processing toolchain. The drought conditions faced by Montana in 2021 led to abnormal crop stands and harvest dates. The thin stands identified limitations of the previous approach, which utilized a dual-network classification system. The first layer, whose purpose was to identify crop, non-crop, and abnormal objects, failed to identify the cropped region in the imagery. Previously, this layer identified cropped regions and crop abnormalities, which were further classified to identify crop stand issues, weeds, and weed species. In the new method, the center region of the image, representing the crop below the horizon and above the header, is broken into 299x299 pixel tiles. Each of these tiles is passed through the classification processor. This method has made accurate training very critical. It has also added the complexity that more than one object may exist within the imagery. These issues have been addressed by moving to a convolutional neural network (CNN) that is trained to assign multiple classification labels for all the objects in the image.
This multi-label approach still uses the Xception classification network at its core, but the final layers have been highly modified to support multi-label classification. The multi-class labels assign specific object labels, such as weed-wild_oats or clean_crop-wheat, and non-specific labels such as crop, weed, and sky. This method has proven to be robust and capable of providing real-time classifications on the Nvidia Jetson.

Goal 3: End-user data application. The MongoDB and HoloViz mapping system reported previously failed to produce a reliable solution, and this work was therefore suspended. In its place, a solution based on a customized QGIS build was implemented as an end-user application. This application connects to the database and the raw imagery that was collected in the field. Detections of interest are displayed as a map of the field; a user can click anywhere on the map, and the closest image or video frame collected in the field is pulled up for viewing. The user can also step forward and backward in time. This allows detections to be validated and other regions of interest to be viewed. The online application for real-time map generation is developed and ready. This will not be as full-featured as the QGIS maps: detected objects will be mapped, but full video frames cannot be reliably transmitted over the limited cellular connections present in many fields. These maps will be available in real time for producers in summer 2022.

Goal 4: Adapt the system for learning new environments. The multi-label classifier has limited our ability to apply the OpenMax algorithm to our data. In the current implementation, it is impossible to distinguish between an image with known object A plus an unknown object B and an image with just object A. Unknown object detection now relies on images where no confidence is given to any of the known classes, or where only a non-specific general label such as weed, crop, or manmade can be applied. These unknown images are marked within the database and collected for further learning processing. During these learning steps, the unknown images are gathered and passed through one or more networks trained on other object classes to produce a bottleneck vector. Once many of these vectors for unknown objects exist (1000 or more), an auto-sorting algorithm builds clusters and sorts the images into new suspected classes. These classes are then viewed, labeled, and cleaned as necessary by a human expert. This process leads to new sorted sets of objects, which are then used in future network training.

Goal 5: Validate data processing and training chains. This effort aims to increase the size of the current image dataset to test the system's ability to learn new characteristics spanning new crops and new weeds. We added new crops during the 2020 and 2021 growing seasons to achieve this goal. We are currently testing our procedures for moving from an untrained or partially trained network to a fully trained network in new crops. Currently, three crops (flax, fava beans, and sugar beets) have been excluded from the main classification network and are being used as the basis for this work.
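Goal 1 above describes a store-and-forward telemetry scheme: detections upload when a cellular link exists and queue locally otherwise. A minimal sketch of that behavior follows; the SQLite outbox, endpoint URL, and payload layout are illustrative assumptions, not NWB's actual telemetry protocol.

```python
# Minimal store-and-forward sketch: detections are queued in a local
# SQLite "outbox" and drained oldest-first whenever connectivity allows.
import json
import sqlite3
import urllib.request

db = sqlite3.connect("telemetry_outbox.db")
db.execute(
    "CREATE TABLE IF NOT EXISTS outbox (id INTEGER PRIMARY KEY, body TEXT)")

def enqueue(detection: dict) -> None:
    """Log a detection locally; it uploads when service is available."""
    db.execute("INSERT INTO outbox (body) VALUES (?)",
               (json.dumps(detection),))
    db.commit()

def drain(url: str = "https://example.invalid/ingest") -> None:
    """Send queued detections oldest-first; stop at the first failure."""
    rows = db.execute("SELECT id, body FROM outbox ORDER BY id").fetchall()
    for row_id, body in rows:
        request = urllib.request.Request(
            url, body.encode(), {"Content-Type": "application/json"})
        try:
            urllib.request.urlopen(request, timeout=10)
        except OSError:        # no cell coverage; retry on the next pass
            break
        db.execute("DELETE FROM outbox WHERE id = ?", (row_id,))
        db.commit()
```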

Publications


Progress 09/01/19 to 08/31/20

Outputs
Target Audience: Our target audience is the small farmer. Due to our ties back to small farms in Montana, we are primarily working directly with farms in Montana, including one organic producer for summer 2020. We are also working with a farm in the Midwest to produce a dataset of regional crops and weeds. These data collection activities have us working directly with our target audience.

Changes/Problems: Garmin has discontinued the VIRB camera, which has us currently seeking camera alternatives. The GoPro action camera was considered, but it does not provide a public API, and our application has not been accepted into their developer program. Therefore, we are exploring alternatives such as a camera directly connected to the Jetson. Using a USB camera would lose the accelerometer and GPS; to address this, we are considering using the GPS receiver we have been developing for another project. This problem should be solved early this fall, which will provide sufficient time to integrate the new camera into the embedded platform.

What opportunities for training and professional development has the project provided? Training activities: This project has provided on-the-job training for three interns from Montana State University. This training has included electrical design, software development, project management, and weed identification. In addition, in-the-field training has taken place to assist employees with crop identification and to help those unfamiliar with agriculture better understand farming practices. Professional development: This project has led to professional development for many of the employees at NWB Sensors. It has pushed NWB Sensors to establish documented coding practices and implement them in all ongoing projects. We have also adapted our project management to allow hardware tracking; this has enabled us to track individual installations of our crop imaging systems and any issues that arise during operation.

How have the results been disseminated to communities of interest? NWB Sensors continues to maintain a website that outlines the high-level concept for this project. Thus far, this website does not include the accomplishments in Phase II of the project. We have distributed maps resulting from our processing to our collaborating growers and plan to increase this effort after the 2020 harvest. The only other venue of result distribution has been a limited presentation of imagery to professors and graduate students at Montana State University when we have needed help identifying weeds or other cropping problems. Fewer than 100 of our 150,000 labeled images have been shared in this manner.

What do you plan to do during the next reporting period to accomplish the goals? The effort over the next reporting period will work toward the development of the early-stage commercial system. This system will enable near real-time processing of collected data and distribution of data to growers via a cellular interface. To achieve this, the work must accomplish the following tasks: finalize and optimize the processing algorithm to be fast enough to provide in-cab processing on the farm equipment; utilize the OpenMax algorithm to develop a system for detecting unknown objects, clustering these objects into probable classes, and providing assisted labeling; and ready the Jetson platform for in-cab operation.
The operation will require a fully integrated algorithm on the Jetson platform, along with supporting power and communication hardware. We must also develop the image database system to be ready for real-time transmitted data. A simplified sketch of the unknown-object gating idea follows.
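The OpenMax step planned above fits per-class models to a CNN's activation vectors. As a much simpler illustration of the same routing idea (explicitly not OpenMax itself), a confidence-threshold gate might look like the following; the threshold and names are illustrative.

```python
# Not OpenMax (which models per-class activation distributions); just a
# simple confidence gate illustrating how low-confidence regions can be
# routed to an "unknown" pool for clustering and assisted labeling.
import numpy as np

CONFIDENCE_FLOOR = 0.5  # illustrative threshold

def route_region(class_scores: np.ndarray) -> str:
    """class_scores: the classifier's per-class scores for one region."""
    if class_scores.max() >= CONFIDENCE_FLOOR:
        return "known"          # accept the top-scoring class
    return "unknown_pool"       # hold for clustering / human labeling
```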

Impacts
What was accomplished under these goals? This project is developing an easy-to-use machine-vision camera to bring precision agriculture to farms of any size. This tool is a combination of a camera, embedded computer, and GPS, all mounted inside the cab of the vehicle. Images are then processed using computer vision software that identifies and classifies objects of interest, such as weeds and cropping problems, resulting in maps of detected objects. The target application is large-acre crops such as cereal grains (wheat and barley) and pulse crops (lentils and garbanzo beans). In these large fields, keeping track of issues can be problematic, and this is an area of agriculture where our systems can have a substantial impact. The foundation of this system is the database of real-world images collected on the farm during harvest and other farming activities. These images contain the lighting conditions, dust, dirty windows, and muted colors present in real-world data and allow us to develop our systems using images containing these problems. During this Phase II effort, these data have continued to grow. The dataset contains over 1.5 million still images and another 1 million equivalent images stored as video, and the current 2020 season will add another 1.5 million images. A subset of images is manually labeled and used as training data; this subset contains over 5,500 in-scene objects and 185,000 single-object scenes. Increasing the number of images in our data and training sets increases the performance of the network. The following sections outline the progress toward the Phase II objectives.

Goal 1: Improve the camera controller. Our camera system utilizes a Garmin VIRB GPS-enabled action camera and a camera controller to capture imagery in the field. The current camera controller is based on an embedded Linux computer. This camera controller sets the data collection rate based on ground speed to ensure sufficient image overlap. During operation, the controller downloads images from the camera in real time and can store up to 150 hours of data, an improvement over the previous use of on-camera memory, which was limited to 20 hours of operation. The camera controller is being transitioned to the Nvidia Jetson. The current camera control software is tested, and image classification algorithms are under test on an Nvidia Jetson TX2 development board. Work this winter will determine which Jetson model will provide sufficient processing overhead for our algorithm and future enhancements.

Goal 2: Improving the data processing toolchain. Over the past year, our multi-step data processing system has seen significant improvement. In this process, a history of images is used to identify atypical regions in the current image. These regions are then classified and assigned a class and a confidence score using a trained convolutional neural network (CNN). The variability of lighting conditions in real on-farm imagery is a challenge to accurate classification by the CNN. The CNN is trained to accurately classify objects under varying lighting by ensuring the training set contains crops, weeds, and other objects representing the proper range of lighting conditions, including sunny, cloudy, dawn, and dusk. During anomaly detection, a real-time lighting correction prevents illumination changes from being classified as anomalies. With these two approaches, current analysis shows no significant lighting-dependent biases, except during periods of low contrast.
A third method is under development that will use object tracking and histogram matching to identify and further correct for lighting changes. The main thrust of machine learning on this project is the training of classifiers that can distinguish not only between crops, human-made objects, and weeds, but also between various weed species and types of human-made objects. This task requires the CNN to have good generalization, i.e., the ability to classify new data correctly. Phase I work used Inception-V3 [1] and showed good results; in Phase II we transitioned to Xception [2] for better generalization. To further support generalization, care is taken during training to prevent overfitting: we use a pruning technique, where a subset of the connections is turned off during each training step. After pruning, the weights are refined, leading to an overall accuracy of ~98% and good generalization.

Goal 3: End-user data application. Work has begun toward an end-user application utilizing a core of MongoDB on Mongo Atlas as a database for image information. We are moving data labeling, image series tracking, and data organization to this platform. The resulting detections and maps are stored in the GeoJSON format [3]. Many farm data companies and mapping companies utilize software compatible with GeoJSON; therefore, this format provides a strategic advantage. The database tracks 5x5 meter nodes on the Earth's surface, which belong to land boundaries such as fields or civil borders. When a problem is detected in an image, the software locates the associated node from the image GPS coordinates (a sketch of this node lookup appears after the references below). Data are visualized with HoloViz, and maps are sent as HTML files to be viewed in a web browser. We aim for an in-field live demo of this platform during summer 2021. In addition to maps of detected objects, this project aims to build very high-resolution images using map tiles constructed from imagery collected from the farm vehicle. As part of our standard processing, the collected images are now corrected for barrel distortion and perspective distortion using camera distortion coefficients.

Goal 4: Adapt the system for learning new environments. Neural networks are unable to generalize beyond their training data. This shortfall is commonly addressed by increasing the breadth of the training data to encompass either more classes or a greater variety of the present classes. This modification fails to solve the underlying problem that the networks are a closed set; every image must be classified into one of the available classes, as is the nature of softmax classification. The OpenMax algorithm is an extension to a CNN that classifies objects into known classes and unknown classes using the raw activation vectors from the CNN. These unknown objects are analyzed for similarity to determine if they may belong to a new class. Such a class could be added to the classifier without retraining the entire network, although these classes would still require human validation and naming. We are currently building up datasets using a crop of fava beans, with its unique features and unique weeds, to test using OpenMax to create new classes. OpenMax is processing-intensive, and we are working on methods of representing the activation vectors to increase the speed of this process.

Goal 5: Validate data processing and training chains. The goal of this effort is to increase the size of the current image dataset to test the system's ability to learn new characteristics spanning new crops and new weeds.
To achieve this goal, we are working with additional growers to map their fields during the 2020 growing season. Currently, seven systems are deployed with five growers, covering a range of crops, issues, and cropping practices. We are working with our collaborating growers to understand the management practices they use in their fields and other specifics of their farms. This effort will continue through September and should represent the most extensive data collection effort in this project to date.

References
[1] X. Xia, C. Xu, and B. Nan, "Inception-v3 for flower classification," in 2017 2nd International Conference on Image, Vision and Computing (ICIVC 2017), 2017, pp. 783-787.
[2] F. Chollet, "Xception: Deep learning with depthwise separable convolutions," in Proc. 30th IEEE Conf. on Computer Vision and Pattern Recognition (CVPR 2017), 2017, pp. 1800-1807.
[3] H. Butler, M. Daly, A. Doyle, S. Gillies, S. Hagen, and T. Schaub, "The GeoJSON Format," IETF RFC 7946, 2016.
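To illustrate the Goal 3 storage scheme, the sketch below snaps a detection's GPS fix to a 5 m by 5 m node and emits it as a GeoJSON feature [3]. The equirectangular approximation and the property names are illustrative assumptions; the report does not specify the actual node indexing.

```python
# Sketch: snap a detection to a 5 m x 5 m grid node and emit GeoJSON.
# Uses a flat-earth (equirectangular) approximation, adequate at field
# scale; node indexing here is illustrative, not NWB's actual scheme.
import json
import math

CELL_M = 5.0
M_PER_DEG_LAT = 111_320.0   # rough meters per degree of latitude

def node_center(lat: float, lon: float):
    """Return the (lat, lon) center of the 5 m cell containing the fix."""
    m_per_deg_lon = M_PER_DEG_LAT * math.cos(math.radians(lat))
    row = math.floor(lat * M_PER_DEG_LAT / CELL_M)
    col = math.floor(lon * m_per_deg_lon / CELL_M)
    return ((row + 0.5) * CELL_M / M_PER_DEG_LAT,
            (col + 0.5) * CELL_M / m_per_deg_lon)

def detection_feature(lat: float, lon: float, label: str) -> dict:
    """Build a GeoJSON Feature for one detection, snapped to its node."""
    nlat, nlon = node_center(lat, lon)
    return {"type": "Feature",
            "geometry": {"type": "Point", "coordinates": [nlon, nlat]},
            "properties": {"class": label}}

print(json.dumps(detection_feature(45.68, -111.04, "weed-wild_oats")))
```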

Publications