Source: AGROFOCAL TECHNOLOGIES, INC submitted to NRP
REAL-TIME CROP MONITORING SYSTEM MOUNTABLE ON ANY FARM VEHICLE
Sponsoring Institution
National Institute of Food and Agriculture
Project Status
ACTIVE
Funding Source
Reporting Frequency
Annual
Accession No.
1031144
Grant No.
2023-33610-40833
Cumulative Award Amt.
$650,000.00
Proposal No.
2023-03939
Multistate No.
(N/A)
Project Start Date
Sep 1, 2023
Project End Date
Aug 31, 2025
Grant Year
2023
Program Code
8.13 - Plant Production and Protection - Engineering
Recipient Organization
AGROFOCAL TECHNOLOGIES, INC
3598 COUR DE JEUNE
SAN JOSE, CA 95148-4306
Performing Department
(N/A)
Non Technical Summary
Agriculture is fundamental to our existence. As our world grows, agricultural productivity must increase to keep pace without reducing profits or increasing environmental impact. This requires continuous optimization of agricultural processes. However, we cannot optimize something that we cannot measure accurately. Crop monitoring is an essential activity that provides such measurements of crop status.

Crop monitoring involves checking on the crops for plant health issues, estimating the produce, planning labor deployment, and making other farming decisions. Traditionally this is done manually, with experienced farmers walking their fields and observing the crops. With many hundreds of acres of farmland, it is impossible to do this manually; only a small fraction of the fields can be surveyed in this fashion. Several companies are working to solve this problem using technology, but their solutions don't fit well with prevalent farming practices. Many use drones, aircraft, or satellite images, which work well for providing a broad overview of farms but fail to look under the canopy and observe issues that require per-plant pictures. Many require images of the fields to be uploaded onto a central server before they can be processed. The step of uploading images to a central server is time-consuming and, in many cases, requires manual oversight to complete. This approach prevents crop monitoring from being real-time. In addition, the amount of data that must be uploaded limits the size of the farm area that can be surveyed. Many companies provide crop monitoring as a service, where they do the image collection and processing and then provide a report. In such cases, their monitoring schedule may or may not match what the growers require. If a grower needs to do the monitoring every day or week for every farm they own, it is usually extremely difficult for such companies to provide that level of service at a reasonable cost.

Agrofocal's goal is to build a real-time crop monitoring system that is easy to use, is affordable, and fits seamlessly with existing farming operations. The crop monitoring system can be mounted on any moving vehicle going through the farm. This allows the camera to get an up-close, under-the-canopy view of the crop. The entire process of collecting the images and operating on those images to extract the insights is done on the vehicle itself in real time, which eliminates the time-consuming process of uploading the images to a central server for processing. The insights are available and ready to be viewed on an app as soon as the image collection is done. Since the image collection and the processing happen together without any manual intervention, the system can be completely owned and operated by the growers themselves, without requiring the involvement of the company providing the technology. This way the grower can mount the system on their vehicle and do the monitoring as many times as they require. They can also choose to mount the system on a vehicle that is going through the farm to perform other tasks, such as weeding, pest control, or spraying, thereby doing the monitoring alongside those tasks and saving time and money.

Agrofocal successfully developed the system during Phase I for farm operations. During Phase II, the team will conduct field trials to assess the system's accuracy in determining crop yield while monitoring operational robustness. In addition, the team will evaluate and refine the AI models based on the field trial data. Successful completion of the Phase II objectives will set the stage for product launch and commercialization.

Agrofocal's crop monitoring solution will provide useful insights in real time, will be easy to use, and will fit seamlessly with existing farming operations. It will enable farmers to make informed decisions based on objective crop measurements. The insights provided will drive efficiencies in agriculture that will increase production, reduce costs, and lessen environmental impact. These efficiencies will positively change agriculture to the benefit of everyone.
Animal Health Component
20%
Research Effort Categories
Basic
5%
Applied
20%
Developmental
75%
Classification

Knowledge Area (KA): 402
Subject of Investigation (SOI): 7210
Field of Science (FOS): 2020
Percent: 100%
Goals / Objectives
The overall goal for Phase II is to run extensive trials on the system developed in Phase I and solve the next set of problems to get the product ready for commercialization and real-life deployment.

Objective 1: Field trials for yield: Establish the accuracy of yield forecasts via extensive full-system trials in farms. Run extensive field trials with our system to predict yield in farms and establish the accuracy of the numbers by comparing them to the actual yield numbers obtained from the farm harvest. In case of discrepancies, find the root cause, adjust the system and the AI model accordingly, and retry.

Objective 2: Field trials for crop health: Establish the accuracy of crop health issue detection via extensive full-system trials in farms. Run extensive field trials with our system to detect plant health issues and establish the accuracy of detection by confirming with an in-person check that the issues exist at the GPS location where the system predicted they did. In case of discrepancies, find the root cause, adjust the system and the AI model accordingly, and retry.

Objective 3: Field trials for robustness: Establish system robustness via extensive full-system trials in farms. Run extensive field trials to subject our system to various stress scenarios and establish that the system either continues to work seamlessly through these scenarios or handles them in a manner that feels logical to the user and allows them to take corrective action.

Objective 4: AI: Continue to refine the AI models to make them more accurate. Continue to refine and improve the AI models based on the learning from the field trials, with the goal of achieving the overall system accuracy metrics specified in Objectives 1 and 2.
Project Methods
Objective 1: Field trials for yield: Establish the accuracy of yield forecasts via extensive full-system trials in farms.

Methods: For strawberry, the AI and Computer Vision models running on the compute box will detect and count the number of berries at different stages of development, from flower to ripe berry, in the images received from the camera. Given the knowledge provided by our agriculture collaborators about the typical time it takes per season for a berry to progress through each stage of development and become a ripe berry, the software will convert the counts of berries in different stages of development into a weekly yield forecast that goes out 5 weeks. Strawberry is a crop that gets harvested every week throughout the season. We will get the number of berries harvested per week from the fields where we run the trial and compare them to our weekly forecast. If an extreme weather event, like heavy rain or frost, happens that can significantly impact the actual counts from the field during the 5 weeks, then we'll discard that trial and begin a new one. (As a side note, the forecast count can be used to estimate how much fruit was lost for insurance purposes in case of such weather events.)

For almonds, the AI and Computer Vision models running on the compute box will detect and count the number of nuts in the images of trees received from the camera. During Phase I work, we developed a numerical method to account for occluded nuts on the trees. Using this method, we will estimate the total number of nuts on the trees based on the number of nuts seen by the camera. Unlike strawberry, almonds are harvested once a season, all at the same time. Post-harvest, farmers already track their yield in terms of pounds of nuts per acre. In addition, they also estimate the average weight of an almond kernel for their harvest. This information is important for pricing their almonds. We will use the pounds of nuts per acre and the average kernel weight from the field where we run our trial to estimate the number of nuts harvested from that field. We will then compare this number to our forecasted number of nuts.

For grapes, the farmers care about two things when it comes to yield: the number of bunches and how many of them are ripe. The AI and Computer Vision models running on the compute box will detect and count the number of bunches in the images of vines received from the camera. In addition, the compute box will run a ripeness detection model, developed during Phase I, that looks for color on the grapes to classify bunches as ripe or not ripe. Like almonds, grapes are harvested once a season. For table grapes, the number of bunches harvested is tracked by the farmers as they handpick and pack boxes in the field. For wine grapes, the harvest is done by machine and the tons of grapes harvested per acre is tracked. Before harvest, the farmers estimate the average weight per bunch by sampling; this is important for estimating the sugar content of the grapes. We'll either use the direct bunch count in the case of table grapes or use the tons-per-acre and bunch-weight information to estimate the bunch count for wine grapes. We'll compare this number to our forecast number of bunches times the percentage ripe. The sketch below illustrates the forecast-conversion step for strawberry.
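To make the conversion step concrete, here is a minimal Python sketch. The stage names and weeks-to-ripen values are hypothetical placeholders; the actual per-season timings come from our agriculture collaborators and are not fixed constants.

```python
# Minimal sketch: convert per-stage berry counts into a weekly yield forecast.
# Stage names and weeks-to-ripen values below are illustrative assumptions only.

# Weeks a berry in each stage typically needs to become a ripe, harvestable berry.
WEEKS_TO_RIPE = {
    "flower": 5,
    "small_green": 4,
    "large_green": 3,
    "white": 2,
    "turning": 1,
    "ripe": 0,  # harvestable now
}

def weekly_yield_forecast(stage_counts: dict[str, int], horizon: int = 5) -> list[int]:
    """Bucket detected berries into the week they are expected to ripen."""
    forecast = [0] * (horizon + 1)  # index 0 = current week
    for stage, count in stage_counts.items():
        week = WEEKS_TO_RIPE[stage]
        if week <= horizon:
            forecast[week] += count
    return forecast

# Example: counts produced by the detector for one field pass.
counts = {"flower": 1200, "small_green": 900, "large_green": 700,
          "white": 500, "turning": 300, "ripe": 250}
print(weekly_yield_forecast(counts))  # berries expected per week, weeks 0..5
```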
Objective 2: Field trials for crop health: Establish the accuracy of crop health issue detection via extensive full-system trials in farms.

As part of this objective, our goal is to establish the accuracy of crop health issue detection by our system by confirming through an in-person check that the issue exists at the GPS location where the system predicted it did. Which health issues to monitor for each crop was decided based on feedback from our collaborating customers.

Methods: For strawberry, we track three health issues: (i) damaged berries, (ii) chlorosis on leaves, and (iii) plant die-off. A berry can get deformed and damaged due to pest infestation, such as lygus, or due to weather. In either case it is not harvestable. Knowing where these damaged berries are located will help farmers check for pest issues in that area and take quick action. Chlorosis is a condition of leaves losing chlorophyll. This can occur for many reasons, such as mite infestation, nutritional deficiencies, water stress, or simply old leaves. Knowing where excessive amounts of chlorosis are located will help farmers check for mite, nutrition, or water issues in that region. Plant die-off occurs when a young plant does not take root and dies. There may be several reasons for this, such as the plant not being planted right or the nursery delivering unhealthy plants. Knowing where excessive amounts of plant die-off are located will help farmers take appropriate action.

For grapes, we track two health issues: (i) decay on bunches and (ii) chlorosis on leaves. Decayed grapes can be symptomatic of pest issues like powdery mildew or botrytis. Knowing where they exist, especially before they spread widely, can help farmers take corrective action early. Chlorosis on grape leaves can point to issues similar to those for strawberry. Knowing its location can help farmers take appropriate action.

Objective 3: Field trials for robustness: Establish system robustness via extensive full-system trials in farms.

In this objective, our goal is to establish through extensive field tests that the system is robust to real-life operations in farms and can consistently operate for an entire day in rugged conditions. This is important before we can sell and deploy the system widely.

Methods: As part of this objective, we'll stress test the system. We'll run it at different times of day to test with different light and shadow directions. To get different angles at which the sunlight, and therefore the shadow, falls on the plants, we'll test the system in the following three segments of the day: 8am-11am, 11am-2pm, and 2pm-5pm. We'll also attempt to pick a mix of overcast and sunny days for testing. We'll test the system running continuously in farms for up to 5 hours. This will ensure that we can at least cover the entire morning, from 7am to noon, during which the majority of farming activity happens. We'll vary the vehicle speed between 2 mph and 4 mph, the typical range at which farm vehicles operate. We'll introduce random slowdowns, stops, and reversing of the vehicle. We'll subject the camera and compute box to regular vibrations from normal driving and exaggerated vibrations by revving the engine. We'll disconnect the camera intermittently and permanently. We'll remove power from the compute box to simulate the power cable coming loose. We'll disconnect the Agrofocal app from the compute box. A sketch of the kind of camera-dropout handling we test for follows.
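As one illustration of failure handling that "feels logical to the user", here is a minimal sketch of reconnect-then-alert logic for camera dropouts. The `camera.grab()`, `camera.reconnect()`, and `alert_user` interfaces are hypothetical stand-ins, not our production APIs.

```python
import time

RECONNECT_WINDOW_S = 30   # illustrative: how long to retry before alerting
RETRY_INTERVAL_S = 2

def read_frames(camera, alert_user):
    """Tolerate brief camera dropouts; flag permanent loss to the app.

    `camera.grab()` returns the next frame or None on a dropout;
    `camera.reconnect()` returns True on success; `alert_user` pushes
    a message to the app. All three are assumed interfaces.
    """
    while True:
        frame = camera.grab()
        if frame is not None:
            yield frame
            continue
        # Intermittent loss: keep retrying quietly within the window.
        deadline = time.monotonic() + RECONNECT_WINDOW_S
        while time.monotonic() < deadline:
            time.sleep(RETRY_INTERVAL_S)
            if camera.reconnect():
                break
        else:
            # Permanent loss: surface it so the user can take action.
            alert_user("Camera disconnected - inspection paused")
            return
```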
Objective 4: AI: Continue to refine the AI models to make them more accurate.

AI models need to be continually trained with new images and refined to ensure improved accuracy over time.

Methods: The work for this objective will be dictated by what we learn from the trials done under Objectives 1 and 2. These objectives will require us to retrain models for new scenarios encountered during real-life testing. As we go through the trials, we'll identify the issues that are causing lower accuracy. In some cases, we may need to collect a new set of images so that we can train the model for a scene that was never encountered before. In other cases, we may need to update the pre-processing step (e.g., tune motion tracking, handle glare in an image) or update the post-processing step (e.g., tune result filtering) to improve accuracy. The work on this objective will continue until we meet the success criteria outlined for Objectives 1 and 2. If time permits, we'll begin training AI models for other crop types, such as citrus and walnuts, to prepare us for expanding beyond the current three crop types as we exit Phase II. The sketch below condenses this refinement loop.
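A condensed sketch of the refinement loop follows. The model, trial, and labeling interfaces are hypothetical placeholders; only the control flow mirrors the process described above.

```python
# Sketch of the trial-driven refinement loop. `model`, `trials`, and
# `label_images` are assumed interfaces standing in for internal tooling.

def refine_until_accurate(model, trials, label_images, target_error=0.10):
    """Retrain on hard examples until every trial meets the error target."""
    while True:
        hard_examples, worst_error = [], 0.0
        for trial in trials:
            predicted = model.count(trial.images)
            error = abs(predicted - trial.harvest_count) / trial.harvest_count
            worst_error = max(worst_error, error)
            if error > target_error:
                # Keep scenes the model handled poorly for the next round.
                hard_examples.extend(trial.images)
        if worst_error <= target_error:
            return model  # success criteria from Objectives 1 and 2 met
        # Label the newly collected scenes and retrain.
        model = model.retrain(label_images(hard_examples))
```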

Progress 09/01/23 to 08/31/24

Outputs
Target Audience: Agrofocal's crop monitoring system will have several audiences with different use cases. Some are listed below.
- Farmers/Growers: For tracking yield and crop health.
- Shippers: Predicting yield for the season from the sourcing growers.
- Crop advisor companies: Help their crop advisors do more by having full field inspection data on their smartphone. They can then visit the portions of the fields that show issues, instead of having to walk the entire field and miss the problem areas.
- Industry consortiums, like the Table Grape Commission, Almond Board, and Strawberry Commission: Get region-wide objective data on yield by sampling more acreage. Currently they sample a small number of farms and trees to arrive at this estimate.
- Insurance agents: Get expected yields and accurate acreage for the farms that they are insuring, to tailor policies and save money for the farmers.
- Bankers: Get expected yield information to assess the risk of their investments.

Changes/Problems: We don't have any major changes in our plans that will impact our rate of spending or the overall outcome of the project. One change, already mentioned, was related to Objective 2: we realized we needed to collect more images of health issues before we can test health-issue-detection accuracy. We plan to continue collecting more images and data on plant health issues during our 2nd year of performance and train a model to accurately spot these issues early in their occurrence. We don't foresee our overall goals requiring changes because of this.

What opportunities for training and professional development has the project provided? During this reporting period, we hired 8 undergraduate interns, including 1 female intern. They were 2nd- and 3rd-year college students majoring in Computer Science, Data Science, or Engineering; one was a recent college graduate. They had limited experience in AI or Computer Vision algorithm development. They all learned the end-to-end development flow for an AI object detection model, starting from image preparation, labeling, cleaning training data, running AI model training, and reviewing results for accuracy, then repeating these steps until the desired accuracy was reached. They then continuously tested the model on new images, monitored its accuracy on unseen images, and collected images where the model did not perform well for future training. In the end, they all became proficient in AI model development. Two of the interns worked on frontend web development. One worked on backend system and database development and testing. Overall, they learned a lot about AI and computer vision and how to stand up a working AI system end to end.

How have the results been disseminated to communities of interest? We set up a booth at the Strawberry Field Day event organized by the Strawberry Commission of California at San Luis Obispo, CA. The event was well attended by strawberry growers, researchers, and others associated with the strawberry industry in California. We explained the technology and its benefits to the visitors and followed up after the event with those interested in trying the technology with a demo. We have personally visited over 10 growers in their offices, reaching them through their public contact information or through references, and presented the technology to them.
Avinash attended a panel session at Supercomputing 2023 titled "Agriculture Empowered by Supercomputing" to participate in discussions on how the high-performance computing community can help solve pressing problems in agriculture. We intend to do more trade shows and grower outreach in the 2nd year of performance.

What do you plan to do during the next reporting period to accomplish the goals? Our plans for the four objectives for the next reporting period are as follows:

Objective 1: We'll continue the trials across the different crop types, adding new farms and new growers to the list. This will ensure we are testing our model in new conditions and are also improving it using data from these new conditions. Our goal is to demonstrate accuracy on new fields so as to give us (and potential users) confidence in the stability of the results.

Objective 2: As mentioned earlier, we changed plans for Objective 2 because we realized we needed to collect more images of health issues before we can test health-issue-detection accuracy. We plan to continue collecting more images and data on plant health issues during our 2nd year of performance and train a model to accurately spot these issues early in their occurrence.

Objective 3: We are mostly done with this objective. Based on the results from this year, we have established the robustness of our system and have been using it regularly in varying conditions without failure. We'll work on the last remaining piece next year: sounding an alert on power loss to the compute box during operation. Currently, the box silently stops without the user knowing; we intend to convert this event into an alert. After this, the goals of this objective will be complete.

Objective 4: We'll continue to improve the accuracy of our results by developing supporting AI models based on analysis of the trial results. We have developed, or are developing, targeted refinements to post-processing algorithms that take advantage of crop-specific characteristics. We are also in the process of developing and testing several supporting AI and computer vision models, such as sky detection, ground detection, and background detection, to help filter out any inaccuracies that still exist. This work will continue in the next performance period.

Impacts
What was accomplished under these goals?

Objective 1: We did nearly 200 field trials across all crop types to forecast yield and compared the numbers with harvest data. For strawberry, we ran inspections with our system either mounted on a vehicle or hand carried. Agrofocal's AI and Computer Vision models detected and counted the number of berries at different stages of development. Using this and other seasonal information, we forecasted yield for the next 5 weeks. We compared this forecasted yield with the actual harvest yield provided to us by our collaborating growers. We conducted 42 trials this season with 2 growers covering over 460 acres. There were two instances when we experienced severe weather during our trials, due to which a significant number of berries had to be discarded for quality reasons. Other than these two weeks, our forecast data came within 10% of the actuals picked from the field, which meets our goal for this objective for strawberry. We have conducted 29 trials for almonds across 3 farms, 57 trials for table grapes across 2 farms, 27 trials for wine grapes in 1 farm, and 44 trials for citrus in 1 farm. We are waiting for harvest data for these crop types to do the comparison for yield forecast accuracy.

Objective 2: We found that before we could work on the accuracy of health issue detection, we needed far more images of the health issues to be able to detect them reliably. The focus of this objective for this year therefore became collecting images of the health issues to keep improving the model. We collected more images of health issues, especially fruit damage and decay. The work of establishing accuracy for health issue detection will be one of the focus areas for the next performance period.

Objective 3: We made significant progress towards this objective. Across the nearly 200 inspection runs done over this reporting period, we covered all scenarios that we had planned to stress test and documented the parameters within which the system can work reliably for long periods of time and continue to provide accurate data. The conditions we tested and their results were as follows:
- Lighting conditions: Best to operate between 7am and 1pm.
- Speed: Best under 5 mph.
- Duration: Tested up to 5 hours without failure; could go longer.
- Motion: Slowdowns and stops do affect the count; the software correctly adjusts according to travel speed. Short-distance reversing (a few feet) is not an issue; long-distance reversing results in double counting and inaccuracies.
- Vibrations: No image quality issues for vibrations up to 2g @ 50Hz.
- Camera: Intermittent loss of camera connection is not an issue; the software waits and reconnects. Permanent loss of camera connection is flagged on the app for the user to take action.
- Power: Handling loss of power to the compute box during usage has not yet been implemented and tested. Currently the inspection just stops abruptly.
- Loss of connection to the app: No impact on the running inspection. The app can reconnect later and reacquire access seamlessly.

Objective 4: By analyzing the results of the trials, we found ways to improve our AI accuracy even further. We developed, or are developing, targeted refinements to post-processing algorithms that take advantage of crop-specific characteristics. We are also in the process of developing and testing several supporting AI and computer vision models, such as sky detection, ground detection, and background detection, to help filter out any inaccuracies that still exist. One such filter is sketched below.
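As an example of the kind of supporting filter described above, here is a minimal sketch that drops detections falling mostly inside a combined sky/ground/background mask. The box format, mask source, and overlap threshold are illustrative assumptions, not our production pipeline.

```python
import numpy as np

def filter_detections(boxes, background_mask, max_overlap=0.5):
    """Drop detections that lie mostly on sky, ground, or background.

    boxes: array of (x1, y1, x2, y2) pixel coordinates.
    background_mask: boolean HxW array, True where supporting models
    (sky/ground/background detection) say no crop can be.
    """
    kept = []
    for x1, y1, x2, y2 in boxes.astype(int):
        region = background_mask[y1:y2, x1:x2]
        if region.size == 0:
            continue
        if region.mean() <= max_overlap:  # mostly on-plant: keep it
            kept.append((x1, y1, x2, y2))
    return kept

# Example: a toy 100x100 mask where the top half of the image is sky.
mask = np.zeros((100, 100), dtype=bool)
mask[:50, :] = True
boxes = np.array([[10, 60, 30, 90],   # on the plants: kept
                  [10, 5, 30, 40]])   # in the sky: filtered out
print(filter_detections(boxes, mask))
```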
Overall, we successfully met the goals we had set for the 1st year of Phase II for Objectives 1, 3, and 4. For Objective 2, we changed plans because we realized we needed to collect more images of health issues before we can test health-issue-detection accuracy.

Publications