Monday, August 12, 2019

Is LIDAR going away for A.I. vision? Elon Musk says yes, others disagree

Cornell researchers published a research paper that is somewhat critical of lidar. Using nothing but stereo cameras, the computer scientists achieved breakthrough results on KITTI, a popular image recognition benchmark for self-driving systems. Their new technique produced results far superior to previously published camera-only results, and not far behind results that combined camera and lidar data. Lidar sensors use lasers to create 3D point maps of their surroundings, measuring objects' distance by timing the light's round trip. Stereo cameras, which rely on two perspectives to establish depth, as human eyes do, seemed promising. But their accuracy in object detection has been woefully low, and the conventional wisdom was that they were too imprecise.
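For readers curious how "two perspectives establish depth," here is a minimal sketch of the standard stereo relation Z = f·B/d, where f is the focal length in pixels, B is the baseline between the two cameras in meters, and d is the disparity (how far a point shifts between the left and right images) in pixels. The specific focal length and baseline values below are made-up illustrations, not figures from the Cornell paper.

```python
def depth_from_disparity(d_px, focal_px=700.0, baseline_m=0.54):
    """Distance (meters) to a point whose left/right images
    are d_px pixels apart, under the pinhole stereo model.

    focal_px and baseline_m are hypothetical example values;
    a real rig would use its calibrated parameters.
    """
    return focal_px * baseline_m / d_px

# A nearby object has large disparity; a distant one, small disparity.
near = depth_from_disparity(100.0)  # large shift -> close
far = depth_from_disparity(5.0)     # small shift -> far
```

Note the inverse relationship: depth error grows roughly with the square of distance, which is one reason stereo depth was long considered too imprecise for far-away obstacles.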

Camera sensors deliver 2D images; lidar delivers points that measure the distance between the instrument and the target. The fact that cameras plus lidar performed better than cameras alone had little to do with the superior accuracy of lidar's distance measurements. Rather, it was because the "native" data format from lidar is easier for machine-learning algorithms to work with.
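The data-format point above can be made concrete: a camera's depth map can be back-projected into a lidar-style 3D point cloud and then fed to detectors designed for lidar data. This is a minimal sketch under the standard pinhole camera model; the function name and the intrinsics (fx, fy, cx, cy) are illustrative assumptions, not the Cornell authors' actual code.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters) into an (N, 3) point cloud.

    fx, fy are focal lengths in pixels; cx, cy is the principal
    point. All are hypothetical pinhole-model parameters.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx  # horizontal offset from the optical axis
    y = (v - cy) * z / fy  # vertical offset
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy example: a flat wall 5 m away, seen as a 4x4 depth map.
pts = depth_to_point_cloud(np.full((4, 4), 5.0), fx=2.0, fy=2.0, cx=2.0, cy=2.0)
```

Once the pixels live in 3D space like this, the same point-cloud detection pipelines built for lidar can consume them, which is the representational advantage the paragraph above describes.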

