Computer Integrated Manufacturing Systems ›› 2022, Vol. 28 ›› Issue (9): 2865-2880. DOI: 10.13196/j.cims.2022.09.018


Weld initial vector detection and robot pose estimation based on improved CenterNet

TANG Xi1,2,YAO Xifan1+,DONG Yi2,ZHANG Junming1   

  1. School of Mechanical and Automotive Engineering, South China University of Technology
    2. School of Mechanical Engineering, Guangzhou City College of Technology
  • Online: 2022-09-30  Published: 2022-10-13
  • Supported by:
    Project supported by the Guangdong Provincial Basic and Applied Basic Research Foundation, China (No. 2021A1515010506), and the China Scholarship Council, China (No. [2020]1509).

Weld initial vector detection and robot pose estimation method based on improved CenterNet

TANG Xi1,2, YAO Xifan1+, DONG Yi2, ZHANG Junming1

  1. School of Mechanical and Automotive Engineering, South China University of Technology
    2. School of Mechanical Engineering, Guangzhou City College of Technology
  • Supported by:
    Guangdong Provincial Basic and Applied Basic Research Foundation, China (No. 2021A1515010506); China Scholarship Council (No. [2020]1509).

Abstract: To solve the problem that traditional methods have difficulty obtaining the robot's initial weld position and pose, a novel method was proposed to estimate the initial weld position and pose of the robot. Feature fusion and the Convolutional Block Attention Module (CBAM) were used to improve CenterNet so that effective features could be extracted. The weld initial vector was obtained with the improved CenterNet, and an edge-line condition selection algorithm was used to extract the edge line of the workpiece. Corresponding points on the initial vector and on the edge line were then obtained by epipolar-constraint matching, from which the 3D information of feature points on the workpiece surface was recovered. Experimental results showed that the improved CenterNet was superior to the compared algorithms in detecting the weld initial vector, and that the workpiece attitude estimation error satisfied the robustness and precision requirements.
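As context for the CBAM enhancement mentioned in the abstract, the following is a minimal PyTorch sketch of a Convolutional Block Attention Module as it is commonly defined (channel attention followed by spatial attention). The layer sizes and the reduction ratio are illustrative assumptions, not the exact configuration used in the paper.

```python
# Minimal sketch of CBAM (channel attention followed by spatial attention).
# Reduction ratio and kernel size are illustrative, not the paper's settings.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Shared MLP applied to both the average-pooled and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1, bias=False),
        )

    def forward(self, x):
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        return torch.sigmoid(avg + mx) * x


class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        # Channel-wise average and max maps, concatenated along the channel axis.
        avg = torch.mean(x, dim=1, keepdim=True)
        mx, _ = torch.max(x, dim=1, keepdim=True)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1))) * x


class CBAM(nn.Module):
    """Channel attention followed by spatial attention, as in Woo et al. (2018)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.channel_att = ChannelAttention(channels, reduction)
        self.spatial_att = SpatialAttention()

    def forward(self, x):
        return self.spatial_att(self.channel_att(x))
```

Such a module can be dropped after selected feature-fusion stages of the CenterNet backbone to reweight channels and spatial locations before the detection heads.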

Key words: weld initial vector detection, binocular stereo vision, CenterNet, feature fusion, pose estimation

Abstract: To address the poor adaptability of traditional methods in weld start-point detection and the resulting difficulty of obtaining the robot's initial welding position and pose, a weld initial vector detection and robot pose estimation method based on an improved CenterNet was proposed. First, feature fusion and the Convolutional Block Attention Module (CBAM) were adopted to enhance CenterNet's ability to extract effective features. Then the improved CenterNet was used to obtain the weld initial vector, and a proposed condition-based selection algorithm for workpiece base-plate edge lines was used to extract the base-plate edge line; corresponding points on the initial vector and the edge line were obtained through epipolar-constraint matching, from which the 3D information of feature points on the workpiece surface was recovered and the workpiece attitude and welding pose were estimated. Experimental results showed that, in the weld initial vector detection task, the improved CenterNet outperformed the compared algorithms in detection accuracy and speed, and the workpiece attitude estimation error satisfied the accuracy and robustness requirements for guiding the robot's initial welding pose.
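For the epipolar-constraint matching and 3D recovery step described above, the sketch below shows one common realization with OpenCV under a calibrated binocular setup: candidate left/right correspondences are filtered by their point-to-epipolar-line distance and the surviving pairs are triangulated with the two projection matrices. The function name, the pixel tolerance, and the calibration inputs (F, P1, P2) are illustrative assumptions, not the authors' implementation.

```python
# Sketch: filter candidate stereo matches with the epipolar constraint, then
# triangulate the consistent pairs into 3D points. F is the fundamental matrix,
# P1/P2 the 3x4 projection matrices from stereo calibration (placeholders here).
import cv2
import numpy as np


def epipolar_filter_and_triangulate(pts_left, pts_right, F, P1, P2, tol=1.0):
    """pts_left/pts_right: (N, 2) arrays of candidate corresponding pixels.
    Keeps pairs whose distance to the epipolar line is below tol (pixels)."""
    pl = cv2.convertPointsToHomogeneous(np.asarray(pts_left, np.float64)).reshape(-1, 3)
    pr = cv2.convertPointsToHomogeneous(np.asarray(pts_right, np.float64)).reshape(-1, 3)
    # Epipolar constraint: for a true correspondence, x_r^T F x_l ~ 0.
    lines_r = (F @ pl.T).T                                  # epipolar lines in the right image
    dist = np.abs(np.sum(lines_r * pr, axis=1)) / np.linalg.norm(lines_r[:, :2], axis=1)
    keep = dist < tol
    # Triangulate surviving pairs (homogeneous result -> Euclidean coordinates).
    X_h = cv2.triangulatePoints(P1, P2,
                                np.asarray(pts_left, np.float64)[keep].T,
                                np.asarray(pts_right, np.float64)[keep].T)
    return (X_h[:3] / X_h[3]).T, keep
```

The recovered 3D points on the weld initial vector and the base-plate edge line can then be used to fit the workpiece plane and derive the workpiece attitude and the robot's initial welding pose.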

Key words: weld initial vector detection, binocular stereo vision, CenterNet, feature fusion, pose estimation
