ShakingBot: dynamic manipulation for bagging

Published online by Cambridge University Press: 04 January 2024

Ningquan Gu
Affiliation: School of Computer Science and Artificial Intelligence, Wuhan Textile University, Wuhan, China

Zhizhong Zhang
Affiliation: School of Computer Science and Artificial Intelligence, Wuhan Textile University, Wuhan, China

Ruhan He*
Affiliation: School of Computer Science and Artificial Intelligence, Wuhan Textile University, Wuhan, China

Lianqing Yu
Affiliation: School of Mechanical Engineering and Automation, Wuhan Textile University, Wuhan, China

*Corresponding author: Ruhan He; Email: heruhan@wtu.edu.cn

Abstract

Bag manipulation with robots is complex and challenging due to the deformability of the bag. Building on a dynamic manipulation strategy, we propose a new framework, ShakingBot, for bagging tasks. ShakingBot uses a perception module to identify the key region of a plastic bag from arbitrary initial configurations. Guided by this segmentation, ShakingBot iteratively executes a novel set of actions, including Bag Adjustment, Dual-arm Shaking, and One-arm Holding, to open the bag. The dynamic action, Dual-arm Shaking, can effectively open the bag regardless of its crumpled configuration. The robot then inserts the items and lifts the bag for transport. We deploy our method on a dual-arm robot and achieve a success rate of 21/33 for inserting at least one item across various initial bag configurations. We demonstrate the advantage of the dynamic shaking action over quasi-static manipulation in the bagging task, and show that our method generalizes across variations in the bag's size, pattern, and color. Supplementary material is available at https://github.com/zhangxiaozhier/ShakingBot.
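
To make the pipeline concrete, the following is a minimal control-loop sketch, in Python, of the procedure the abstract describes. It assumes a hypothetical robot/camera interface; none of the names below (segment_key_region, bag_adjustment, and so on) are taken from the authors' released code, which is linked above.

def bagging_pipeline(robot, camera, items, max_attempts=5):
    """Open a crumpled bag, insert the items, then lift the bag for transport.

    Hypothetical interface: every method name here is an illustrative
    assumption, not the authors' actual API.
    """
    key_region = None
    for _ in range(max_attempts):
        image = camera.capture()
        # Perception module: segment the bag's key region (e.g., the rim)
        # from an arbitrary initial configuration.
        key_region = robot.segment_key_region(image)
        if robot.opening_is_large_enough(key_region):
            break
        # The three action primitives named in the abstract, applied iteratively:
        robot.bag_adjustment(key_region)    # reposition the crumpled bag
        robot.dual_arm_shaking(key_region)  # dynamic shake that opens the bag
        robot.one_arm_holding(key_region)   # keep the opening from collapsing
    for item in items:
        robot.insert_item(item, key_region)
    robot.lift_and_transport()

The design choice the paper highlights is the dynamic Dual-arm Shaking step, which opens the bag without reasoning explicitly about its crumpled state, in contrast to quasi-static approaches.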

Type: Research Article
Copyright: © The Author(s), 2024. Published by Cambridge University Press
