
Learn to Grasp Unknown-Adjacent Objects for Sequential Robotic Manipulation

  • Regular paper
Journal of Intelligent & Robotic Systems

Abstract

Grasping unknown-adjacent objects with limited prior information is a daunting task in robotic manipulation. Grasping an object in such a scenario is substantially more difficult than grasping an isolated object. For this reason, recent solutions typically require non-prehensile actions prior to grasping (e.g., pushing, toppling, squeezing, or rolling). However, these actions are often loosely coupled to the subsequent grasp and introduce delays. Because the task is a sequential decision-making problem, each non-prehensile action should have an intended utility and effect on the consecutive grasping action. This paper takes a step towards solving the issue by introducing a self-learning strategy that manipulates unknown objects in challenging scenarios with minimal prior knowledge. The developed system jointly learns pre-grasping (non-prehensile shifting) and grasping (prehensile) actions using model-free deep reinforcement learning. The agent learns sequences of pre-grasp manipulations that purposely lead to successful grasps. The system is object-agnostic: it requires neither task-specific training data nor predefined object information (e.g., pose estimates or 3D CAD models). The proposed model trains end-to-end policies (from visual observations to decision-making) to find optimal manipulation strategies. A perception network maps visual inputs to actions as dense pixel-wise Q-values and learns quickly through trial and error. Experimental findings demonstrate the effectiveness of jointly learning the pre-grasp manipulation and grasping policies, which greatly increases the grasp success rate. The proposed system has been experimentally tested and validated in simulation and real-world settings using a 6-DOF robot with a two-finger gripper.
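To make the abstract's description of the perception network concrete, below is a minimal sketch of a fully convolutional Q-network that maps an RGB-D observation to dense pixel-wise Q-values for two action primitives (a pre-grasp shift and a grasp), trained with a standard one-step temporal-difference target. It is written in PyTorch; the architecture, layer sizes, hyperparameters, and function names are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch (assumed architecture, not the paper's exact network):
# a fully convolutional network producing per-pixel Q-values for two
# action primitives, trained with a DQN-style temporal-difference loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PixelwiseQNet(nn.Module):
    """Maps a 4-channel (RGB + depth) observation to per-pixel Q-values
    for each action primitive (e.g., shift and grasp)."""

    def __init__(self, num_primitives: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        # One Q-map per primitive, upsampled back to the input resolution.
        self.q_head = nn.Conv2d(64, num_primitives, 1)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        h = self.encoder(obs)
        q = self.q_head(h)
        return F.interpolate(q, size=obs.shape[-2:], mode="bilinear",
                             align_corners=False)


def td_loss(q_net, target_net, obs, action_idx, reward, next_obs, gamma=0.9):
    """One-step temporal-difference loss on the Q-value of the executed
    (primitive, row, col) action, as in standard deep Q-learning."""
    q_map = q_net(obs)                      # shape (B, P, H, W)
    b = torch.arange(obs.size(0))
    # action_idx holds the executed primitive and pixel per sample: (B, 3).
    q_sa = q_map[b, action_idx[:, 0], action_idx[:, 1], action_idx[:, 2]]
    with torch.no_grad():
        q_next = target_net(next_obs).flatten(1).max(dim=1).values
        target = reward + gamma * q_next
    return F.smooth_l1_loss(q_sa, target)
```

At execution time, the greedy policy simply selects the primitive and pixel with the highest predicted Q-value and executes the corresponding motion primitive at that image location.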


Code or data availability

Not applicable.


Acknowledgements

This research was funded by the NSERC Discovery Program, grant number RGPIN-2017-05762, and the Mitacs Accelerate Program, application number IT14727.

Author information


Contributions

All authors made substantial contributions to the conceptional design of the work, implementation, and paper writing.

Corresponding author

Correspondence to Haoxiang Lang.

Ethics declarations

Ethics Approval

Not applicable.

Consent to Participate

Not applicable.

Consent for Publication

Not applicable.

Conflicts of Interest/Competing Interests

The authors have no relevant financial or non-financial interests to disclose.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file1 (MP4 145738 KB)


About this article


Cite this article

Al-Shanoon, A., Lang, H. Learn to Grasp Unknown-Adjacent Objects for Sequential Robotic Manipulation. J Intell Robot Syst 105, 83 (2022). https://doi.org/10.1007/s10846-022-01702-4

