Source: UNIV OF MARYLAND submitted to NRP
LABOR AND PROCESS EFFICIENCY THROUGH AUTONOMOUS MACHINE VISION GUIDED ROBOTIC LOADING ON FOOD MANUFACTURING LINES
Sponsoring Institution
National Institute of Food and Agriculture
Project Status
COMPLETE
Funding Source
Reporting Frequency
Annual
Accession No.
1022104
Grant No.
2020-67017-31191
Cumulative Award Amt.
$494,000.00
Proposal No.
2019-06739
Multistate No.
(N/A)
Project Start Date
Jun 1, 2020
Project End Date
May 31, 2025
Grant Year
2020
Program Code
[A1364]- Novel Foods and Innovative Manufacturing Technologies
Recipient Organization
UNIV OF MARYLAND
(N/A)
COLLEGE PARK, MD 20742
Performing Department
Bioengineering
Non Technical Summary
Vast amounts of agricultural and food products are produced annually that require processing before reaching consumers. Labor shortages and turnover increasingly affect facilities critical to the American food supply chain. Automating these processes will ease labor pressures, increase output, and enhance worker safety, while ultimately increasing the availability of food and lowering costs. Our proposed manufacturing technology will make difficult picking and loading processes easier, benefiting both consumers and food processors.

The food processing and manufacturing industries, like so many others, find themselves in the midst of their second revolution in a hundred years. First came the machines that standardized and streamlined food production for human workers; now AI-driven automation will relieve the tandem pressures of labor shortages and an ever-present drive for greater efficiency and productivity. Despite all the advances made in the wake of the rising tide of automation, some very basic elements of the production process have remained too complex for machines. The picking and loading of objects onto a process line is one of the most universally applicable examples of this class.

The crux of the problem lies in the diversity and unpredictability of shapes found even within the same product, as well as the inherently random orientation of objects dumped onto a conveyor. To process a product properly, a system must know exactly where and how each object lies, or the machine will function inefficiently, even unusably so. Arranging unpredictable objects in a predictable manner has been a very challenging problem, and previous attempts to create such a system have failed. With recent advances in imaging technology, AI development, and robotic control schemes, however, an opportunity has arisen to develop one.

For example, poultry lines require workers to hang chickens during processing: workers must identify, pick up, and load each chicken by its legs, upside down, onto moving shackle lines. These tasks are very labor intensive (lifting 5-8 lbs per chicken) and suffer very high labor turnover, especially at the live-hanging stage. Seafood processing, such as oyster sorting and crab meat picking, shares similar challenges. In coastal crab-house meat picking, it is very difficult to pick and load each crab for machine processing: the crabs' appearance, orientation, size, and overlap on a loading table are all challenging factors for a machine.

We propose a three-part system to detect, decide on, and dynamically pick diverse objects from a loading area onto a conveyor. Laser-precision 3D imaging paired with a high-speed camera will map the topography of a region of interest. An advanced, highly trained AI will then deduce the picking target. Finally, a robotic arm with specially designed fingers will transfer each object to a target conveyor in the precise manner required for subsequent processing. This project will have a significant impact on smart food manufacturing: the new automated loading technology will reduce labor intensity and increase the reliability and productivity of food manufacturing plants.
The intelligent loading system will create new opportunities by making subsequent processing steps achievable that would otherwise be abandoned because of the loading obstacle. Such machines will also raise the need for local high-skill labor training programs to operate and maintain the high-tech machinery. Friction between the local labor supply and the food and agricultural industries, rooted in a longstanding reliance on H-2B guest workers, will be reduced, and the stigma attached to work that was once a source of local pride will be lifted. This project has the potential to impact many rural communities, such as the Chesapeake Bay watershed community and poultry-processing regions, especially for the younger generations. It will enhance the competitiveness of U.S. agricultural and food products in the global market through smart agriculture technology advancement.
Animal Health Component
25%
Research Effort Categories
Basic
50%
Applied
25%
Developmental
25%
Classification

Knowledge Area (KA) | Subject of Investigation (SOI) | Field of Science (FOS) | Percent
501                 | 5310                           | 2020                   | 100%
Goals / Objectives
This proposal aims to develop machine vision-guided technology to solve the loading challenges and enable smart agricultural/food manufacturing. We will use a blue crab processing line as an example to demonstrate our novel techniques, which can be transferred to other agricultural materials. Specific tasks are: 1) to develop a laser-based 3D vision technique to obtain the topography of on-line materials; 2) to develop deep learning methods to recognize morphological features of objects and generate coordinates for robotic picking and loading; 3) to develop control schemes to pick up on-line materials and load them in the correct orientation; and 4) to conduct tests to verify real-world performance and demonstrate it to the stakeholders. The outcome of this project will be automated loading technology for food manufacturing lines, scalable to other food and agricultural commodities and to engineering applications in food manufacturing automation.
Project Methods
The methods for this project develop vision-guided technology to solve the loading challenges and enable smart agricultural/food manufacturing, using a blue crab processing line as an example to demonstrate techniques that can be transferred to other agricultural materials. First, we will develop an RGB-D laser-based precision 3D vision technique to obtain the topography of on-line materials. Next, we will develop deep learning algorithms to recognize morphological features of objects and generate coordinates for 3D vision-guided robotic picking and loading; here, trained convolutional neural networks, 3D point-cloud preprocessing, action-point prediction, and control strategies for the robotic paths and actions will be established. Then, we will develop control schemes to pick up on-line materials and load them in the correct orientation. Finally, we will conduct tests to verify real-world performance and demonstrate the system to the stakeholders.
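To make the vision-to-robot handoff concrete, the sketch below shows the standard final step of such a pipeline: back-projecting a detected action point from the depth image into 3D camera coordinates with a pinhole model, then mapping it into the robot's base frame. This is an illustrative sketch, not the project's code; the intrinsics (fx, fy, cx, cy) and the 4x4 transform T_cam2robot are placeholders for values that would come from camera and hand-eye calibration.

```python
import numpy as np

def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with measured depth (m) into camera coordinates."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth, 1.0])  # homogeneous camera-frame point

def camera_to_robot(p_cam, T_cam2robot):
    """Map a homogeneous camera-frame point into the robot base frame."""
    return T_cam2robot @ p_cam

# Example with placeholder calibration values.
T_cam2robot = np.eye(4)                      # would come from hand-eye calibration
p = pixel_to_camera(640, 360, 0.82, fx=900.0, fy=900.0, cx=640.0, cy=360.0)
print(camera_to_robot(p, T_cam2robot)[:3])   # (x, y, z) pick point in robot frame
```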

Progress 06/01/23 to 05/31/24

Outputs
Target Audience: We engaged with businesses in the local seafood sector that were interested in adopting automated methods for picking meat from Chesapeake Blue crabs. Our objective was to learn about their specific needs, challenges, and expectations to guide our research efforts effectively. We learned that crab meat excision is a delicate process, with only a few trajectories that extract the crab lump meat without shredding or damage, and that the value of extracted lump meat is roughly halved if the meat is damaged or shredded. After several sessions of practicing crab picking with local experts, we determined that crab picking requires a combination of carving and fast, chisel-like movements. A conventional model-free reinforcement learning technique would require thousands of crabs to learn this crab-picking trajectory, but our stakeholders made it clear that crabs are an expensive and valuable commodity. As a result, we developed an alternative training scheme that requires only a few hundred crabs. Specifically, we designed and built a precise pose tracker for recording crab-picking movements from a human; these recorded movements will then train an efficient imitation learning model that enables automated crab picking. Concurrently, we are disseminating our designs and findings to academic peers specializing in smart food processing, aiming to share findings, discuss potential advancements, and refine our methodologies through scholarly discourse and academic conferences.

Changes/Problems: Nothing Reported

What opportunities for training and professional development has the project provided? Graduate and undergraduate students were trained in cutting-edge technologies, including advanced computer vision algorithms, machine learning techniques, and algorithmic optimization. Students were trained to design, build, and develop various intricate parts using machine-shop and 3D-printing techniques. Several cutting-edge imitation learning models were explored and developed. Finally, students were trained to develop a unique laser-based vision system from scratch using Python and C# code.

How have the results been disseminated to communities of interest? The results have been disseminated at the ASABE and NABEC annual conferences, to the UMD Agricultural Extension specialist, at the NIFA Annual Project Directors' Symposium, and through direct interactions with the public during Maryland Day.

What do you plan to do during the next reporting period to accomplish the goals? Automation of crab picking is a challenging task because crabs have complex and nonhomogeneous morphology, especially inside the lump-meat cavities. Furthermore, the amount of lump meat inside the crab cavity tends to vary over the season due to mating behaviors. We plan to enhance our system's capabilities and generalize our detection algorithms to encompass various agricultural commodities beyond Chesapeake Blue crabs, and we intend to further test and deploy our system and measure the success rate of the full robotic task.
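The report above describes a pose tracker for recording human crab-picking demonstrations; its software interface is not specified, so the following is a minimal, hypothetical sketch of how such demonstrations could be captured: a fixed-rate loop polls a 6-DOF pose source (position plus quaternion) at the reported 140 Hz and appends timestamped samples. The get_pose callback is an assumed stand-in for the actual tracker.

```python
import time
from dataclasses import dataclass, field

@dataclass
class PoseSample:
    t: float            # seconds since the start of the demonstration
    position: tuple     # (x, y, z) of the instrument tip, metres
    orientation: tuple  # unit quaternion (qx, qy, qz, qw)

@dataclass
class Demonstration:
    samples: list = field(default_factory=list)

    def record(self, get_pose, duration_s, rate_hz=140.0):
        """Poll get_pose() -> (position, quaternion) at a fixed rate."""
        period = 1.0 / rate_hz
        t0 = time.time()
        while time.time() - t0 < duration_s:
            pos, quat = get_pose()  # hypothetical tracker callback
            self.samples.append(PoseSample(time.time() - t0, pos, quat))
            time.sleep(period)
```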

Impacts
What was accomplished under these goals? In this reporting period, the following tasks were accomplished: 1) A laser-based 3D vision technique to record fast and precise hand movements during crab picking was developed; it currently operates at 140 frames per second with a mean end-effector tip-tracking resolution of 2 mm. The human hand is assumed to hold an instrument during pose-trajectory recording, and the tip of this instrument (the end-effector tip) is tracked in 6 DOF (position and orientation) over time, even when the tip is occluded. A publication related to this work will be submitted in the coming weeks. 2) and 3) To develop efficient imitation learning models that automate crab picking, a dataset of exemplary crab-picking trajectories and associated performance labels must be built. To this end, a library of 50 crab trajectories has already been recorded, and more trajectories will be recorded in the coming weeks. Further, imitation learning models within the realm of behavior cloning and inverse reinforcement learning were explored; a minimal behavior-cloning sketch follows below. A rough parametric model of the crab-picking trajectory was also developed from this dataset and will be used to accelerate deep imitation-model training. As of now, we have fulfilled the first three aims of this project and will work to complete the final aim during its remaining duration.
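As a concrete illustration of the behavior-cloning direction mentioned above, here is a minimal PyTorch sketch: a small MLP policy regresses a pose increment from the current 6-DOF end-effector state and is trained with mean-squared error on state-action pairs extracted from recorded trajectories. The network shape, dimensions, and training loop are illustrative assumptions, not the project's actual model.

```python
import torch
import torch.nn as nn

class BCPolicy(nn.Module):
    """Map a 6-DOF end-effector state to a predicted pose increment."""
    def __init__(self, state_dim=6, action_dim=6, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, state):
        return self.net(state)

def train_bc(policy, states, actions, epochs=100, lr=1e-3):
    """states, actions: (N, 6) tensors built from recorded demonstrations."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(policy(states), actions)
        loss.backward()
        opt.step()
    return policy
```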

Publications

  • Type: Peer Reviewed Journal Articles Status: Published Year Published: 2024 Citation: Ali MA, Wang D, Tao Y. Active Dual Line-Laser Scanning for Depth Imaging of Piled Agricultural Commodities for Itemized Processing Lines. Sensors. 2024; 24(8):2385. https://doi.org/10.3390/s24082385


Progress 06/01/22 to 05/31/23

Outputs
Target Audience: We engaged with businesses in the local seafood sector interested in adopting automated methods for loading Chesapeake Blue crabs into processing machines as part of a pseudo-autonomous system. Our objective was to learn about their specific needs, challenges, and expectations to guide our research efforts effectively. We learned that there are two types of loading. The first is for cooked crabs that need to be cleaned out (de-backed), in which case we would image, detect, and load raw crabs into a de-backing and cleaning machine. The second is for transferring cleaned crabs to a processing machine, in which case we would image, detect, and load cleaned crabs. The main difference between the two tasks is the set of features our detection algorithms must recognize. Concurrently, we disseminated our designs and findings to academic peers specializing in smart food processing, aiming to share findings, discuss potential advancements, and refine our methodologies through scholarly discourse and academic conferences.

Changes/Problems: Nothing Reported

What opportunities for training and professional development has the project provided? Graduate and undergraduate students were trained in cutting-edge technologies, including advanced computer vision algorithms, machine learning techniques, and algorithmic optimization. Computer vision algorithms were developed using the Detectron2 framework, where pre-trained weights were loaded. Undergraduate students spearheaded the initiative to translate the previously developed Python reconstruction algorithms into C++.

How have the results been disseminated to communities of interest? The results have been disseminated to the academic community through conference oral presentations; we presented the work at two conferences, at the international and regional levels, and the presentations won several awards. We intend to disseminate our work further through academic journal articles; three are currently in preparation. Awards for presentations to professional fields and extension specialists: Mohamed Ali (PhD student) won 3rd prize ($100) in the Annual Graduate Oral Presentation Competition at the 2022 NABEC conference, August 2022; Mohamed Ali won the 2022 ASABE Student Oral Competition Award, first place, for "Active-Laser Scanning and Intelligent Picking for Automated Loading to Process Machines," awarded by the Information Technology, Sensors, and Control Systems Technology Community, July 20, 2022; Mohamed Ali won a 2nd-place award for an "Envisioning 2050 in the Southeast: AI-Driven Innovations in Agriculture" presentation, Auburn University, 2022.

What do you plan to do during the next reporting period to accomplish the goals? Loading crabs into processing machines is challenging because crabs have complex and nonhomogeneous morphology. We plan to enhance our system's capabilities and generalize our detection algorithms to encompass various agricultural commodities beyond Chesapeake Blue crabs; in particular, we intend to use the recently released Segment Anything Model to detect and approach various agricultural commodities (a brief illustration follows below). Secondly, we plan to further test and deploy our system and measure the success rate of the full robotic task. Finally, we intend to extend our system to more complex tasks, such as disentangling crabs from one another as they are removed from a pile. Our approach will extend knowledge and techniques from the deep imitation learning field.
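For context, the snippet below shows roughly how the publicly released Segment Anything Model can be run over an image of piled commodities using Meta's segment-anything package in automatic mask-generation mode. The checkpoint filename and image path are placeholders, and how SAM would actually be integrated into this pipeline is a design question for the next reporting period, not something reported here.

```python
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Placeholder checkpoint; weights are distributed via the segment-anything repo.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

# SAM expects an RGB uint8 image; "crab_pile.png" is a hypothetical example.
image = cv2.cvtColor(cv2.imread("crab_pile.png"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)  # dicts with 'segmentation', 'area', ...
largest = max(masks, key=lambda m: m["area"])
```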

Impacts
What was accomplished under these goals? In this reporting period, we extended our former working prototype with additional features to improve intelligent front-end loading of Chesapeake Blue crabs into a processing machine. First, we advanced our previously developed depth-imaging system with hardware and software that accelerate data processing: a new Matrox frame grabber with an onboard FPGA was employed to increase image-acquisition bandwidth and processing, and we rewrote our image-processing algorithms in C++ to accelerate reconstruction. These efforts decreased image-acquisition time to 700 milliseconds and reconstruction time to 1.1 seconds, for a total of 1.8 seconds. Second, we extended our computer vision algorithms for 3D crab detection, localization, and orientation, aiming to improve segmentation/keypoint average precision and recall. We trained and validated several variations of Mask R-CNN on the crab dataset using the following backbones: ResNet-50 and ResNet-101 with a Feature Pyramid Network, a dilated C5 layer, and the original C4 layer, as well as ResNeXt-101 (a minimal training configuration is sketched below). We trained these networks on color images alone and on RGB-D data; our detection algorithms consistently performed better with the additional fourth channel, and ResNeXt-101 proved to be the highest-performing backbone. Additionally, we furthered our computer vision work with state-of-the-art machine learning components such as Transformers, training and running inference on three transformer-based algorithms: DETR, MaskFormer, and Mask2Former. Mask2Former's instance-segmentation precision and recall exceeded those of all other networks tested. As of now, we have fulfilled the first three aims of the project.
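A minimal Detectron2 training configuration for one of the backbones named above (ResNeXt-101 with FPN) might look like the following sketch. The dataset names, class count, and solver settings are placeholders; the RGB-D variant would additionally need a custom data loader and four-element PIXEL_MEAN/PIXEL_STD, which is omitted here.

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_3x.yaml")
cfg.DATASETS.TRAIN = ("crab_train",)  # hypothetical registered dataset names
cfg.DATASETS.TEST = ("crab_val",)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1   # single "crab" class (assumption)
cfg.SOLVER.BASE_LR = 0.00025          # illustrative solver settings
cfg.SOLVER.MAX_ITER = 10000

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```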

Publications

  • Type: Conference Papers and Presentations Status: Published Year Published: 2022 Citation: Ali MA, Wang D, Tao Y. Active-Laser Scanning and Intelligent Picking for Automated Loading of Agricultural Commodities to Processing Machines. Oral presentation at American Society of Agricultural and Biological Engineers, 2022 Annual International Meeting; 2022 July 17-20; Houston, Texas. ASABE paper # 2200459. doi:10.13031/aim.202200459.
  • Type: Conference Papers and Presentations Status: Other Year Published: 2022 Citation: Ali MA, Wang D, Tao Y. Automated Front-End Loading and Intelligent Picking of Chesapeake Blue Crabs to Processing Machines. Oral presentation at Northeast Agricultural / Biological Engineering Conference; 2022 July 30-August 2; Edgewood, Maryland.
  • Type: Conference Papers and Presentations Status: Published Year Published: 2022 Citation: Tao, Y. 2022. USDA NIFA PI Meeting, "Labor and Process Efficiency Through Autonomous Machine Vision Guided Robotic Loading on Food Manufacturing Lines" (panel). USDA NIFA, PPT presentation. July 7, 2022.
  • Type: Conference Papers and Presentations Status: Published Year Published: 2022 Citation: Tao, Y. 2022. Invited panel speaker. AI and Imaging-guided Smart Food Processing Lines. ASABE International Conference, Houston, July 17-20, 2022.


Progress 06/01/21 to 05/31/22

Outputs
Target Audience: In this reporting period, our target audiences were local seafood industry partners that expressed interest in an automated method for loading Chesapeake Blue crabs into processing machines, as well as academics in the field of agricultural smart food processing.

Changes/Problems: Nothing Reported

What opportunities for training and professional development has the project provided? This project supported student training across several technical avenues and educational levels. K-12, undergraduate, and graduate students worked on hardware-related projects that used Computer-Aided Design to fabricate the imaging-system setup. Students used 3D-printing technology to fabricate housings for the laser galvanometric units that held the motors, and they developed hands-on experience cutting and drilling 80/20 aluminum as well as High-Density Polyethylene (food-safe) plastic for countertops. Students were also trained on software control schemes and analytics, with an emphasis on robotics middleware such as the Robot Operating System; they used packages such as MoveIt for inverse-kinematics solutions and motion planning of the Universal Robots UR5e manipulator. Students also had the opportunity to learn deep learning frameworks such as PyTorch and TensorFlow, and computer vision algorithms were built in the Detectron2 framework starting from pre-trained weights. One PhD student graduated and obtained a tenure-track assistant professor position at the University of Arkansas. One high school student earned AP science credit on this project for two semesters of the senior year and learned robotic end-effector development with the project team. The PhD students on the project are making excellent progress toward the innovative technologies.

How have the results been disseminated to communities of interest? The results have been disseminated to the academic community through conference oral and poster presentations, as well as conference papers. We also made a video presentation of the entire front-end loading process, which was shown to our university president and provost and played at the Maryland Robotics Center to preview our laboratory's progress and gather feedback from peers.

What do you plan to do during the next reporting period to accomplish the goals? We plan to enhance our system's speed and generalize our detection algorithms to encompass various agricultural commodities beyond Chesapeake Blue crabs. Multiple aspects of our system create bottlenecks for overall speed. First, we are using a research-grade collaborative robot with joint limits on acceleration and velocity; in the upcoming reporting period, we aim to integrate a high-speed industrial delta robot with a high payload and range to achieve greater throughput. Second, our 3D reconstruction algorithms are computationally expensive and have taken up to 17 seconds to reconstruct a single depth map. We aim to remedy this in several ways: replacing interpreted, dynamically typed Python code with faster low-level C++; upgrading our computer hardware to processors with more cores to exploit multithreading/multiprocessing; offloading some computation from the processor to a frame grabber equipped with a Field-Programmable Gate Array (FPGA), which gives us the flexibility to customize computational units tailored to large-scale image computation at the hardware level; and incorporating state-of-the-art graphics processing units for faster detection. According to ideal calculations, we should then acquire images, reconstruct a depth map, and detect agricultural commodities in under 2 seconds, i.e., at least 30 objects per minute (a back-of-envelope budget appears below). To generalize our system to additional agricultural commodities, we aim to generate RGB-D data of piled products and build labeled datasets for training computer vision algorithms for instance segmentation and pose estimation. To further generalize our research to agricultural products that resemble specialty crops, we plan to extend our pick-and-place task to vacuum-based end effectors with custom-made suction cups. Finally, we plan to test the system and have it compete with a human operator to demonstrate its effectiveness and capabilities.
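As a back-of-envelope check on the 2-second target above, the sketch below totals a hypothetical cycle using the acquisition and reconstruction times eventually achieved (reported in the 2022-23 period above) plus an assumed GPU detection budget; the detection figure is a placeholder, not a measured value.

```python
# Hypothetical per-object latency budget for the <2 s cycle-time target.
acquire_s     = 0.7  # image acquisition (figure from the later FPGA frame grabber work)
reconstruct_s = 1.1  # depth-map reconstruction (figure from the later C++ rewrite)
detect_s      = 0.2  # assumed GPU inference budget (placeholder)

cycle_s = acquire_s + reconstruct_s + detect_s
print(f"cycle {cycle_s:.1f} s -> {60 / cycle_s:.0f} objects/min")  # 2.0 s -> 30/min
```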

Impacts
What was accomplished under these goals? Through extensive research and development, we established a working prototype for intelligent front-end loading of Chesapeake Blue crabs into a processing machine. We extended our previously developed line-laser scanning system to scan with two lasers of different colors (red and green), generating two depth maps. Because each depth map has gaps caused by occlusions of the laser illumination, we combine the two maps to generate a final result (a merging sketch follows below). We developed a Region-based Convolutional Neural Network with box, mask, and keypoint heads for detecting Chesapeake Blue crabs in a pile. The detection algorithm was built using the Detectron2 framework, which sits on top of PyTorch, and it can accurately detect crabs and their orientation in 3D. The detected crab masks are individually superimposed on the depth map and ranked to determine the highest-priority crab for removal; keypoints are then superimposed on the depth map to outline each crab's 3D orientation. Finally, we take the bounding box's center point and pass it through a camera-to-robot calibration to determine its location in real-world coordinates. These coordinates direct a Universal Robots UR5e, a six-degree-of-freedom manipulator, to hover above the crab with the end effector facing down. The robot wields a pneumatically actuated soft-gripper end effector specifically sized for crabs; the gripper has enough force to hold a crab in motion while remaining compliant, preventing the crab from being crushed. The robot then moves through predefined motions to place the crab on the processing machine. To date, we have fulfilled the first three aims of the project.
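The following is a minimal NumPy sketch of two steps described above: fusing the red- and green-laser depth maps, and ranking detected masks so the crab sitting highest in the pile is removed first. It assumes occluded pixels are marked NaN and that smaller depth means closer to a downward-facing camera; both are assumptions about the data format, not details taken from the report.

```python
import numpy as np

def merge_depth_maps(d_red, d_green):
    """Fuse two line-laser depth maps; NaN marks pixels occluded in a scan."""
    merged = np.where(np.isnan(d_red), d_green, d_red)
    both = ~np.isnan(d_red) & ~np.isnan(d_green)
    merged[both] = 0.5 * (d_red[both] + d_green[both])  # average where both valid
    return merged

def pick_priority(masks, depth):
    """Index of the crab whose masked region is, on average, nearest the camera."""
    return int(np.argmin([np.nanmean(depth[m]) for m in masks]))
```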

Publications

  • Type: Conference Papers and Presentations Status: Awaiting Publication Year Published: 2022 Citation: Ali, M., Wu, B., Wheeler, C., Wang, D., Tao, Y. Active-Laser Scanning and Intelligent Picking for Automated Loading of Agricultural Commodities to Processing Machines. ASABE Paper No. 2200459. July 17-20, 2022. DOI: https://doi.org/10.13031/aim.202200459
  • Type: Conference Papers and Presentations Status: Published Year Published: 2022 Citation: Ali, M., Wu, B., Wang, D., Tao, Y. Active-Laser Scanning and Intelligent Picking for Automated Loading of Agricultural Commodities to Processing Machines. Poster. Auburn, Alabama. March 2022.


Progress 06/01/20 to 05/31/21

Outputs
Target Audience: In this reporting period, our target audience has been industry partners and academics working in the automation field. We implemented a novel dual-laser 3D range-imaging technique that aids in reconstructing surface profiles of agricultural commodities. Our work aims to provide technological advancements that enhance jobs in communities that typically rely on repetitive tasks that are dangerous and tedious, such as sorting and loading objects onto processing lines; these communities are usually concentrated in rural and industrial regions of the country.

Changes/Problems: Nothing Reported

What opportunities for training and professional development has the project provided? The project allowed both graduate and undergraduate students to develop hardware control algorithms in C++ for controlling the galvanometers and the camera exposure time. Students also used the Matrox Imaging Library (MIL) to capture images over a cropped area in order to maintain fast frame rates, learned valuable optics and triangulation concepts that aided in developing the triangulation geometric model, and used MATLAB to process and 3D-reconstruct images.

How have the results been disseminated to communities of interest? Results and subsequent analyses have been collected. We intend to disseminate our results in journal and conference publications; the results will be presented at the 2021 American Society of Agricultural and Biological Engineers annual international conference.

What do you plan to do during the next reporting period to accomplish the goals? We plan to migrate our reconstruction algorithms onto the Robot Operating System platform. Subsequently, we will develop deep learning algorithms to segment agricultural commodities and establish key morphological features for picking them from piles of objects in the loading section. Finally, we will calibrate our industrial robot to our camera system's coordinate frame.

Impacts
What was accomplished under these goals? We developed an active dual line-laser triangulation imaging method for 3D depth imaging of high-throughput industrial lines. The novel geometric configuration reconstructs regions of piled objects that a single laser would leave occluded. This is the first step of the vision-guided robotics we aim to develop under Objective 1. The topography system achieves sub-millimeter resolution and is fairly inexpensive to build; a sketch of the underlying triangulation geometry follows below.
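For intuition, the classic single line-laser triangulation relation is sketched below. This is textbook geometry under an assumed layout (camera at the origin, laser emitter offset horizontally by the baseline, light plane at angle theta to the optical axis), not the project's exact dual-laser configuration; the sign convention depends on the physical arrangement.

```python
import numpy as np

def laser_depth(u, f, baseline, theta):
    """Depth of a laser-stripe pixel via triangulation.

    u        : stripe's horizontal pixel offset from the principal point
    f        : focal length in pixels
    baseline : camera-to-laser separation (same units as the returned depth)
    theta    : angle between the laser plane and the optical axis (radians)

    Derived by intersecting the camera ray x = u*z/f with the laser plane
    x = -baseline + z*tan(theta).
    """
    return (f * baseline) / (f * np.tan(theta) - u)
```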

Publications

  • Type: Conference Papers and Presentations Status: Submitted Year Published: 2021 Citation: Wang, D., M. Ali, J. Cobau, Y. Tao. 2021. Designs of a customized active 3D scanning system for food processing applications. ASABE Paper No. 2100388. July 12-16, 2021. DOI: https://doi.org/10.13031/aim.2100388
  • Type: Conference Papers and Presentations Status: Awaiting Publication Year Published: 2021 Citation: Ali, M., D. Wang, A. Hevaganinge, J. Cobau, Y. Tao. 2021. Robotic extraction of Chesapeake crab meat. ASABE International Annual Meeting. ASABE Paper No. 2100134. July 12-16, 2021.