Anomaly detection for vehicles requires accurate vehicle positions in order to monitor the movement of surrounding vehicles and predict their behavior. To provide the exact location of surrounding vehicles, this paper explores ways to accurately estimate the lateral distance between a mid-low point on the rear of a vehicle and the adjacent lane by detecting lane markers and objects with orientation information. Both detection tasks exploit the complementary advantages of camera and LiDAR sensors and rely on deep neural networks.
To estimate the lateral distance of other vehicles with respect to lanes, we detect lanes with an image-based deep learning model and objects with a point-cloud-based deep learning model. We further focus on the projection process that maps the 3D point cloud onto the image domain in order to merge detection results from the two different domains. By adding a matching step to this projection, we can find the 3D lane points corresponding to the 2D lane detection results, so the lateral distance between target points on vehicles and the lanes can be computed from the derived correspondence.
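The projection step described above can be sketched as a standard pinhole projection of LiDAR points into the image plane. The following is a minimal sketch, not the paper's implementation: the intrinsic matrix `K` and the extrinsics `R`, `t` are placeholder calibration values, assumed known from camera-LiDAR calibration.

```python
import numpy as np

def project_lidar_to_image(points_3d, K, R, t):
    """Project Nx3 LiDAR points into pixel coordinates.

    points_3d: (N, 3) array in the LiDAR frame.
    K: (3, 3) camera intrinsic matrix.
    R, t: rotation (3, 3) and translation (3,) from LiDAR to camera frame.
    Returns (N, 2) pixel coordinates and a mask of points in front of the camera.
    """
    cam = points_3d @ R.T + t          # transform into the camera frame
    in_front = cam[:, 2] > 0           # keep only points with positive depth
    uvw = cam @ K.T                    # apply the pinhole projection
    uv = uvw[:, :2] / uvw[:, 2:3]      # perspective divide -> pixel coordinates
    return uv, in_front

# Toy calibration: identity extrinsics, focal length 700, principal point (640, 360).
K = np.array([[700.0, 0.0, 640.0],
              [0.0, 700.0, 360.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
pts = np.array([[1.0, 0.5, 10.0]])     # one point 10 m in front of the camera
uv, mask = project_lidar_to_image(pts, K, R, t)
# -> uv[0] is (710.0, 395.0), mask[0] is True
```

Once 2D lane detections are available in the same pixel space, matching reduces to associating projected 3D points with nearby detected lane pixels.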
To evaluate the estimation performance of our method, this paper considers the error between the rear center point of the 3D object bounding box and the corresponding point on the actual 3D map, as well as the errors that occur when mapping lanes detected in 2D into 3D.
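Given a matched 3D lane and a target point on a vehicle, the lateral distance itself is a point-to-polyline distance in the road plane. This is a minimal sketch under that assumption; the lane representation and point choice in the actual pipeline may differ.

```python
import numpy as np

def lateral_distance(point, lane_pts):
    """Shortest (lateral) distance from a 2D point to a lane polyline.

    point: (2,) target point, e.g. the rear center of a vehicle in the road plane.
    lane_pts: (M, 2) ordered lane points, assumed already recovered in 3D and
              projected onto the road plane.
    """
    p = np.asarray(point, dtype=float)
    best = np.inf
    for a, b in zip(lane_pts[:-1], lane_pts[1:]):
        ab = b - a
        # clamp the projection parameter so the foot stays on the segment
        s = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        best = min(best, np.linalg.norm(p - (a + s * ab)))
    return best

lane = np.array([[0.0, 0.0], [0.0, 10.0], [0.0, 20.0]])  # straight lane along y
rear = np.array([1.8, 5.0])                               # vehicle 1.8 m to the side
d = lateral_distance(rear, lane)
# -> 1.8
```

Errors in the rear point (from the 3D box) and in the 2D-to-3D lane mapping both propagate directly into this distance, which is why the evaluation above considers them separately.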