Progress 09/01/23 to 08/31/24
Outputs
Target Audience:
Nothing Reported
Changes/Problems:
Nothing Reported
What opportunities for training and professional development has the project provided? The two doctoral students (Ryan Deuling and Arshia Akbari) directly supported by this award are mentoring two undergraduate students (Patrick Fong and Daniel Chow) on designing and manufacturing end effectors for fruit grasping and detaching. They are exploring two ideas: a vine gripper and a cable-driven gripper. Ryan and Arshia attended the Southern California Robotics Symposium and presented a poster about this project focusing on the design and development of the flexible robot.
How have the results been disseminated to communities of interest?
Nothing Reported
What do you plan to do during the next reporting period to accomplish the goals?
Develop a Fruit Harvesting End Effector: We will develop an end effector that can grasp and detach fruit and integrate it with the robot manipulator. Vacuum suction will be used to grasp the fruit, and a dedicated mechanism will be designed to twist and pull the fruit.
Develop a Dynamic Motion Control System: We will use data-driven methods to model the dynamics of the robot and develop control algorithms for dynamic follow-the-leader motion (an illustrative sketch follows below).
Integrate Cameras with the Robot: We will select commercially sourced cameras and integrate them into the mobile platform and the robot manipulator. Mechanisms to steer the cameras will be designed or adapted from commercial products.
Develop a Continual Learning Framework: We will implement adaptive neural rendering methods that dynamically refine renderings of evolving scenes without requiring all prior data or complete retraining.
Detect and Render Harvesting-Specific Targets and Obstacles: We will train models to detect critical components, such as fruits and branches, during the rendering process. We will then develop a method to render selected targets and obstacles at high resolution and with high computational efficiency.
Integrate and Test Robotic Navigation: We will combine the rendering and detection models with robotic navigation. This will be assessed in both simulated and real-world canopy environments to ensure system robustness and reliability.
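As a purely illustrative sketch of the planned data-driven motion control work, the Python snippet below fits a discrete-time linear model x[k+1] ≈ A x[k] + B u[k] to logged state/actuation data by least squares. The state and input definitions, variable names, and the linear model structure are placeholder assumptions for illustration only, not the method that will ultimately be adopted.

import numpy as np

def fit_linear_dynamics(X, U):
    """X: (T, n) array of logged states, U: (T, m) array of logged inputs.
    Returns A (n x n) and B (n x m) minimizing ||X[1:] - (X[:-1] @ A.T + U[:-1] @ B.T)||."""
    Z = np.hstack([X[:-1], U[:-1]])            # regressors [x_k, u_k]
    Y = X[1:]                                  # targets x_{k+1}
    W, *_ = np.linalg.lstsq(Z, Y, rcond=None)  # solve Y ~= Z @ W
    n = X.shape[1]
    return W[:n].T, W[n:].T                    # A, B

# Example with synthetic logs: 200 timesteps, 4 states, 2 inputs (placeholders)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
U = rng.normal(size=(200, 2))
A, B = fit_linear_dynamics(X, U)
print(A.shape, B.shape)                        # (4, 4) (4, 2)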
Impacts
What was accomplished under these goals?
Goal 1: We have designed and developed a robotic manipulator featuring a telescoping continuum structure capable of in-plane bending, enabling follow-the-leader motion suited to navigating unstructured tree canopies. The manipulator and its actuation systems are housed within a framing structure mounted onto a mobile base, transitioning the workspace from 2D to 3D.

Robot Design: The continuum robot consists of two telescoping tubular structures, denoted the inner and outer tubes. The modular design of each tubular structure enables customization of the robot's length, with the current design extending up to 600 mm. The modules of each tube are connected via flexible backbones consisting of four cables that provide suitable stiffness. Each tube is driven by two 1 mm diameter nitinol cables positioned 180° apart and aligned for in-plane manipulation. A flexible track mounted along the inner tube engages with bearing-based rollers secured to each of the outer tube modules, allowing the tubes to telescope. Flexible tubing secured along the inner tube keeps the retracted segments in contact with a pair of rigid linear rods mounted within the framing structure, ensuring the retracted segments remain straight.

Actuation Design: The actuation system features a linear actuation unit and a cable-driven actuation unit for each tube, providing control over each tube's extension and curvature. The cable-driven actuation employs a rack-and-pinion gear system in which the driving cables of each tube are secured to the rack gears. A stepper motor drives the pinion gear, translating the rack gears linearly and generating equal but opposite motion in the driving cables. This bidirectional displacement induces a bending configuration in the extended segments of the tubes. The components of this actuation unit are mounted on a platform to which the tube is secured. The linear actuation units, housed within the framing structure, use a ball-screw mechanism driven by a stepper motor to translate a carriage secured to its respective tube. Each tube is mounted to its carriage through an adapter attached to the tube's cable-driven actuation platform.

Motion Control: The robot's operation consists of the extension/retraction of a proximal and a distal segment along a curvilinear trajectory that can bypass branches and reach a target fruit. To extend the proximal segment, both tubes extend equally while the cable displacements are adjusted to match the proximal segment's desired curvature. To extend the distal segment, the inner tube's extension halts while the outer tube continues extending, with the cable displacements adjusted to match the distal segment's desired curvature. This process can be reversed to retract the robot along the same trajectory. Because the two segments can have independent curvatures and lengths, the manipulator can adopt both C- and S-shaped bending profiles. To determine the control scheme required to follow different curvilinear trajectories, the motion of the robot is characterized to relate the robot's actuator, configuration, and task spaces. First, a wide range of actuation parameters is applied while images of the robot are simultaneously captured. These images are then input into a vision-based measurement system that measures the curvature and extended length of each segment, from which the robot's end-tip position is calculated. Relating these measurements to the actuation parameters at each instance yields the mapping between the task, configuration, and actuator spaces.
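To illustrate the configuration-to-task-space mapping described above, the following Python sketch computes the planar end-tip position of a two-segment manipulator from each segment's curvature and extended length under a piecewise constant-curvature assumption. The frame convention and numerical values are illustrative assumptions, not the calibrated mapping obtained from the vision-based characterization.

import numpy as np

def segment_tip(kappa, length):
    """Planar tip of one constant-curvature segment that emanates along +x.
    Returns the tip position and the heading change theta = kappa * length."""
    theta = kappa * length
    if abs(kappa) < 1e-9:                       # straight segment
        return np.array([length, 0.0]), theta
    return np.array([np.sin(theta) / kappa,
                     (1.0 - np.cos(theta)) / kappa]), theta

def two_segment_tip(kappa1, l1, kappa2, l2):
    """End-tip position of a proximal + distal constant-curvature pair."""
    p1, theta1 = segment_tip(kappa1, l1)
    p2, _ = segment_tip(kappa2, l2)
    rot = np.array([[np.cos(theta1), -np.sin(theta1)],
                    [np.sin(theta1),  np.cos(theta1)]])
    return p1 + rot @ p2                        # distal tip expressed in the base frame

# Example: opposite curvatures give an S-shaped profile (units: 1/m and m, illustrative)
print(two_segment_tip(kappa1=2.0, l1=0.3, kappa2=-3.0, l2=0.3))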
Motion Evaluation: Evaluation of the manipulator's motion capabilities was performed through experiments assessing its follow-the-leader (FTL) motion and task-space navigation performance. These tests used the characterization above to determine the control inputs for achieving the configurations estimated to follow the desired trajectories. In the FTL study, the robot's ability to accurately follow a randomly generated reference trajectory was assessed: images captured during extension/retraction were analyzed to measure the manipulator's end-tip positions, which were then compared to the reference trajectory. In the task-space navigation study, the robot was tested for its precision in reaching a specified target while adopting an optimal configuration to avoid a randomly generated obstacle. Once the manipulator was estimated to have reached the target, an image was captured to measure the end-tip position, which was compared to the reference target coordinates. The average errors observed in the FTL and task-space navigation experiments were 9.5 mm and 8.8 mm, respectively.

Goal 2: We focus on advancing neural rendering techniques to process visual data captured by a robot navigating through dense canopy environments. By leveraging state-of-the-art models such as Nerfacto and Splatfacto, we aim to reconstruct and render high-quality 3D scenes from the robot's collected data. The pipeline includes capturing images, estimating camera poses and scene features, training neural rendering models, and synthesizing accurate 3D representations. Our main objectives are to (1) design and develop computer vision models to detect and localize harvesting targets (e.g., fruits) and obstacles (e.g., branches) in canopy environments, and (2) enhance neural rendering techniques to achieve high-quality 3D reconstructions that accurately capture fine details of complex scenes, enabling effective integration with detection and navigation systems. By improving rendering quality and integrating object detection models, we aim to enable robots to navigate canopy environments efficiently, avoid obstacles, and focus on relevant targets for agricultural tasks.

Significant progress has been made in data collection, processing, and rendering. Images captured by the robot's camera were processed using COLMAP, which reliably estimated camera poses in well-textured, well-lit scenes. However, it faced challenges with reflective surfaces, shadows, and dynamic lighting conditions: reflective surfaces resulted in incomplete reconstructions, while shadows caused edge blurring in some regions. We addressed these issues by capturing videos under consistent lighting conditions and ensuring sufficient texture in the scenes. The processed images were used to train neural rendering models. Among the tested methods, Splatfacto (Gaussian splatting) outperformed Nerfacto (NeRF) in both speed and visual quality: Splatfacto rendered a 26-second video in approximately 30-35 seconds and produced clearer details of branches and fruits, particularly in scenes with complex geometry. A collection of approximately 300 images at a resolution of 540x960 pixels was sufficient to achieve satisfactory 3D reconstructions at 1080p. As a result, our current system can reliably render salient structures of the tree canopy and fruit.
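For reference, the sketch below outlines the capture-to-training pipeline described above using the nerfstudio command-line tools that provide the Nerfacto and Splatfacto models. The file paths are placeholders and the exact command-line flags may differ between nerfstudio versions; this is a minimal illustration, not the project's exact processing script.

import subprocess
from pathlib import Path

video = Path("canopy_scan.mp4")         # placeholder: video captured by the robot
workdir = Path("processed/canopy")      # placeholder output directory

# 1) Extract frames and estimate camera poses (COLMAP runs under the hood).
subprocess.run(["ns-process-data", "video",
                "--data", str(video), "--output-dir", str(workdir)], check=True)

# 2) Train a Gaussian-splatting (Splatfacto) model on the posed images;
#    substitute "nerfacto" to train the NeRF baseline instead.
subprocess.run(["ns-train", "splatfacto", "--data", str(workdir)], check=True)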
Goal 3: A preliminary in-field demonstration was performed to showcase the robot's ability to perform agricultural tasks in the unstructured environments found in citrus trees. For this demonstration, the manipulator was mounted onto the mobile base and the robot was controlled by a user in real time via a remote control. First, a target was chosen, and the mobile base was positioned so that the target was within the manipulator's workspace. Next, the manipulator was extended until the target was reached, while the curvature of the two segments was adjusted as needed to avoid obstacle branches. This demonstration highlights the robot's ability to perform FTL motion, showing that it can navigate these types of environments to perform agricultural tasks.
Publications