HAN Xiaofeng, LU Jianfeng, LI Xiangrui, et al. Pedestrian detection method based on LIDAR sensors[J]. Journal of Harbin Engineering University, 2019, 40(06): 1149-1154. [doi:10.11990/jheu.201803119]

Pedestrian detection method based on LIDAR sensors

Journal of Harbin Engineering University [ISSN:1006-6977/CN:61-1281/TN]

Volume: 40
Issue: 2019, No. 06
Pages: 1149-1154
Publication date: 2019-06-05

Article Information

Title: Pedestrian detection method based on LIDAR sensors
Authors: 韩骁枫, 陆建峰, 李祥瑞, 赵春霞
Author(s): HAN Xiaofeng, LU Jianfeng, LI Xiangrui, ZHAO Chunxia
Affiliation: School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 211094, Jiangsu, China
Keywords: pedestrian detection; LIDAR; unmanned ground vehicle; environment understanding; autonomous driving; object detection; support vector machine; fast point feature histogram
CLC number: O235
DOI: 10.11990/jheu.201803119
Document code: A
Abstract:
Most existing features extracted from LIDAR point clouds cannot describe the shape distribution of pedestrian targets. To address this problem, this paper proposes a pedestrian detection method based on LIDAR sensors for unmanned ground vehicles. All non-ground LIDAR points are first clustered with the DBSCAN algorithm, and a fast point feature histogram (FPFH) distribution feature is then proposed and used to train a support vector machine (SVM) classifier for pedestrian detection. The accuracy and effectiveness of the method were evaluated on the KITTI object dataset and on an unmanned ground vehicle. The results show that, compared with other LIDAR features, the proposed FPFH distribution feature effectively improves pedestrian detection performance while meeting the real-time requirements of pedestrian detection for unmanned ground vehicles.
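The pipeline described in the abstract (DBSCAN clustering of non-ground points, an FPFH-based descriptor per cluster, and an SVM classifier) can be sketched as follows. This is only an illustrative sketch, not the authors' implementation: the Open3D FPFH routine, the scikit-learn DBSCAN/SVC calls, and all parameter values (eps, search radii, kernel) are assumptions chosen for demonstration, and averaging per-point FPFH histograms over a cluster is one possible reading of the paper's "distribution feature".

```python
# Illustrative sketch of the abstract's pipeline (assumed libraries and parameters).
import numpy as np
import open3d as o3d
from sklearn.cluster import DBSCAN
from sklearn.svm import SVC

def cluster_descriptors(points_xyz, eps=0.5, min_samples=10):
    """Cluster non-ground LIDAR points with DBSCAN and return one
    fixed-length FPFH-based descriptor per cluster (placeholder parameters)."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points_xyz)
    descriptors, kept_labels = [], []
    for lbl in set(labels) - {-1}:                       # -1 marks DBSCAN noise
        cluster = points_xyz[labels == lbl]
        pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(cluster))
        pcd.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=0.3, max_nn=30))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            pcd, o3d.geometry.KDTreeSearchParamHybrid(radius=0.5, max_nn=50))
        # fpfh.data is a 33 x N histogram matrix; averaging over the cluster's
        # points gives one fixed-length feature vector (an assumed reading of
        # the paper's FPFH distribution feature, not its exact definition).
        descriptors.append(np.asarray(fpfh.data).mean(axis=1))
        kept_labels.append(lbl)
    return np.array(descriptors), kept_labels

# Training (with annotated pedestrian / non-pedestrian clusters):
#   clf = SVC(kernel="rbf").fit(X_train, y_train)
# Detection on a new scan of non-ground points:
#   descriptors, _ = cluster_descriptors(non_ground_points)
#   predictions = clf.predict(descriptors)
```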

References:

[1] DALAL N, TRIGGS B. Histograms of oriented gradients for human detection[C]//Proceedings of 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. San Diego, CA, USA, 2005:886-893.
[2] HOANG V D, LE M H, JO K H. Hybrid cascade boosting machine using variant scale blocks based HOG features for pedestrian detection[J]. Neurocomputing, 2014, 135:357-366.
[3] LI Bo, LI Ye, TIAN Bin, et al. Part-based pedestrian detection using grammar model and ABM-HoG features[C]//Proceedings of 2013 IEEE International Conference on Vehicular Electronics and Safety. Dongguan, China, 2013:78-83.
[4] CHO H, RYBSKI P E, BAR-HILLEL A, et al. Real-time pedestrian detection with deformable part models[C]//Proceedings of 2012 IEEE Intelligent Vehicles Symposium. Alcala de Henares, Spain, 2012:1035-1042.
[5] PREMEBIDA C, LUDWIG O, NUNES U. LIDAR and vision-based pedestrian detection system[J]. Journal of field robotics, 2009, 26(9):696-711.
[6] HÄSELICH M, JÖBGEN B, WOJKE N, et al. Confidence-based pedestrian tracking in unstructured environments using 3D laser distance measurements[C]//Proceedings of 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems. Chicago, IL, USA, 2014:4118-4123.
[7] NAVARRO-SERMENT L E, MERTZ C, HEBERT M. Pedestrian detection and tracking using three-dimensional LADAR data[J]. The international journal of robotics research, 2010, 29(12):1516-1528.
[8] KIDONO K, MIYASAKA T, WATANABE A, et al. Pedestrian recognition using high-definition LIDAR[C]//Proceedings of 2011 IEEE Intelligent Vehicles Symposium. Baden-Baden, Germany, 2011:405-410.
[9] WANG Jun, WU Tao, ZHENG Zhongyang. LIDAR and vision based pedestrian detection and tracking system[C]//Proceedings of 2015 IEEE International Conference on Progress in Informatics and Computing. Nanjing, China, 2015:118-122.
[10] AHTIAINEN J, PEYNOT T, SAARINEN J, et al. Learned ultra-wideband RADAR sensor model for augmented LIDAR-based traversability mapping in vegetated environments[C]//Proceedings of the International Conference on Information Fusion. Washington, DC, USA, 2015.
[11] SUGER B, STEDER B, BURGARD W. Traversability analysis for mobile robots in outdoor environments:a semi-supervised learning approach based on 3D-lidar data[C]//Proceedings of 2015 IEEE International Conference on Robotics and Automation. Seattle, WA, USA, 2015:3941-3946.
[12] BI Fangming, WANG Weikui, CHEN Long. DBSCAN: density-based spatial clustering of applications with noise[J]. Journal of Nanjing University (natural sciences), 2012, 48(4): 491-498. (in Chinese)
[13] JOHNSON A E, HEBERT M. Using spin images for efficient object recognition in cluttered 3D scenes[J]. IEEE transactions on pattern analysis and machine intelligence, 1999, 21(5):433-449.
[14] RUSU R B, BLODOW N, MARTON Z C, et al. Aligning point cloud views using persistent feature histograms[C]//Proceedings of 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems. Nice, France, 2008:3384-3391.
[15] RUSU R B, BLODOW N, BEETZ M. Fast Point Feature Histograms (FPFH) for 3D registration[C]//Proceedings of 2009 IEEE International Conference on Robotics and Automation. Kobe, Japan, 2009:3212-3217.
[16] GEIGER A, LENZ P, URTASUN R. Are we ready for autonomous driving? The KITTI vision benchmark suite[C]//Proceedings of 2012 IEEE Conference on Computer Vision and Pattern Recognition. Providence, RI, USA, 2012:3354-3361.

Memo:
Received: 2018-03-31.
Foundation items: National Natural Science Foundation of China (61233011, 91220301); National Science and Technology Major Project of China (2015ZX01041101); the 111 Project (B13022); Jiangsu Province Science and Technology Support Program (Social Development) (BE2014714); Jiangsu Key Laboratory of Image and Video Understanding for Social Safety (Nanjing University of Science and Technology) (30920140122007).
About the authors: HAN Xiaofeng, male, Ph.D. candidate; LU Jianfeng, male, professor and doctoral supervisor.
Corresponding author: LU Jianfeng, E-mail: lujf@njust.edu.cn.
Last Update: 2019-06-03