There are adaptive grippers, parallel grippers, suction grippers, and others I’m sure. Of course, sometimes it won’t be easy to modify an item, so you’ll have to find a gripper or workflow that accommodates it. However, often you can design something as a grippable attachment or embed it directly in a CAD model, as in the Powder Bot example above. This topic is about design choices when it comes to choosing a combination of gripper + grippable component for various mat. sci. and chemistry workflows. As an example, I’d typically imagine a suction gripper working better with glass slides, requiring virtually no modification as long as a tight seal can be made, though repeatable positioning of the suction gripper relative to the glass slide could be challenging.
What are some good methods, design principles, and “gotchas” when adding a custom grippable component to an existing CAD design (lightweight custom labware, etc.) so that it can be easily and robustly picked up with an off-the-shelf robotic arm?
I think there are a couple of points that need to be determined before designing (depending on the systems that ship with different brands of robotic arms):
Based on the relative position of the end-effector (hand) and the vision sensor (eye):
Eye-to-Hand keeps the two separate: the camera is mounted in a fixed spot with a fixed field of view, so the accuracy of visual localization for grasping depends directly on how well the camera is calibrated. Eye-in-Hand, on the other hand, mounts the vision sensor on the robotic arm itself, so the field of view moves with the arm. The closer the sensor gets to the target, the higher the accuracy, but if it gets too close, the target may fall out of the field of view.
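To illustrate why calibration matters so much in the eye-to-hand case: every detection made in the camera frame has to be mapped through one fixed base-to-camera transform, so any calibration error carries straight into the grasp pose. A minimal numpy sketch, with made-up frame names and placeholder numbers:

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Eye-to-hand: the camera is fixed in the workcell, so a single calibrated
# transform T_base_cam (camera pose expressed in the robot base frame) is
# reused for every grasp. Numbers here are placeholders.
T_base_cam = make_transform(np.eye(3), np.array([0.60, 0.00, 0.45]))

# Pose of a detected object in the camera frame (e.g., from an RGB-D detector).
T_cam_obj = make_transform(np.eye(3), np.array([0.02, -0.05, 0.40]))

# Object pose in the robot base frame: any error in T_base_cam shows up here
# unchanged, which is why eye-to-hand accuracy hinges on camera calibration.
T_base_obj = T_base_cam @ T_cam_obj
print("grasp target (base frame):", T_base_obj[:3, 3])
```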
The vision-processing system behind it:
Model-based: you know exactly what you will grasp. The object is scanned or modeled beforehand and the model data is given to the robot system in advance, so the machine has far less work to do during the actual grasp. Process: offline computation → online perception (use the RGB image or point cloud to compute the 3D position of each object) → grasp-point calculation (in the real-world coordinate system, select the best grasp point for each object based on requirements such as the center of gravity and collision avoidance). A rough sketch of that last step is below.
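Here is a minimal sketch of the “offline grasp point + online pose estimate” flow. The pose-estimation call is only a stand-in (any registration or matching method could fill that role), and the grasp offset is an arbitrary example value:

```python
import numpy as np

# Offline: a grasp point defined once in the object's own (CAD/model) frame,
# e.g. chosen near the center of mass and clear of collisions.
GRASP_IN_MODEL_FRAME = np.array([0.0, 0.0, 0.025, 1.0])  # homogeneous, metres

def estimate_object_pose(point_cloud):
    """Stand-in for online perception: return the 4x4 pose of the object in
    the world frame, e.g. from point-cloud registration against the CAD model.
    Here we just return a fixed placeholder pose."""
    T = np.eye(4)
    T[:3, 3] = [0.45, -0.10, 0.02]
    return T

def grasp_point_world(point_cloud):
    """Map the precomputed model-frame grasp point into the world frame."""
    T_world_obj = estimate_object_pose(point_cloud)
    return (T_world_obj @ GRASP_IN_MODEL_FRAME)[:3]

print("grasp point (world frame):", grasp_point_world(point_cloud=None))
```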
Half-model-based: you don’t need a full model of every object to be grasped, but you do need a large number of similar objects to train the algorithm, so that it can effectively segment an image of a pile of objects and recognize their edges. Process: train the image-segmentation algorithm offline → run the segmentation online → find a suitable grasp point. A correspondingly rough sketch is below.
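And a similarly rough sketch of the half-model-based case, where the grasp point comes from an online segmentation rather than a known model. The segmentation function is a placeholder for whatever trained model is used, and taking the mask centroid is only the simplest possible heuristic:

```python
import numpy as np

def segment_objects(depth_image):
    """Placeholder for a trained instance-segmentation model: return one
    boolean mask per detected object. Here we fake a single square mask."""
    mask = np.zeros_like(depth_image, dtype=bool)
    mask[40:60, 70:90] = True
    return [mask]

def grasp_pixels(depth_image):
    """Pick a candidate grasp pixel per segmented object (mask centroid here;
    a real system would also check approach direction, collisions, etc.)."""
    candidates = []
    for mask in segment_objects(depth_image):
        rows, cols = np.nonzero(mask)
        candidates.append((int(rows.mean()), int(cols.mean())))
    return candidates

depth = np.random.rand(120, 160).astype(np.float32)  # dummy depth frame
print("grasp pixel candidates:", grasp_pixels(depth))
```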
And even for objects of the same shape, surface reflectivity and ambient lighting can change how difficult the grasp is, so the different scenarios also need to be considered.
Whether to use a multi-joint gripper as shown below (5 DoF).
To add some thoughts here - there are three main approaches, I think: (i) adapting the consumables to fit the gripper (obviously not possible for all things - e.g., vials are vials); (ii) adapting the gripper to fit the consumables (e.g., a gripper for vials; this extends to our multi-functional gripper with multiple ‘grasps’); or (iii) using a general-purpose gripper that can grasp anything, the end point of this perhaps being an N-fingered human-like hand with haptics, etc.
For now, my personal view is that haptics, etc., are more trouble than they are worth, and that bespoke grippers (possibly with tool changers, if you have a lot of different tasks) are the best way to go for the next few years.
Great comments above!
Currently, we are doing (ii) and adapting the fingers of the hand-e gripper to several different consumables in our workflow.
I wonder if there is a reliable mechanism to automatically change grippers/fingers so that approach (ii) can work in 99% of scenarios with most consumables.
This can be either a locking/unlocking mechanism in some tool-changing platforms or a shape-shifting mechanism driven by actuators.