
LiDAR vs. Pure Vision

2025-06-03

LiDAR vs. Pure Vision: Xiaomi and Xpeng Trade Blows at a Distance

In 2025, a fierce debate over the perception approach for autonomous driving flared up across the auto industry. Within the space of a few days, Xiaomi announced that the YU7, in a move widely read as "plugging the gaps", would come standard across the lineup with LiDAR and 4D millimeter-wave radar; on the other side, Xpeng Motors chairman He Xiaopeng once again championed the pure-vision approach and offered a concrete timetable: "In about a year and a half at the latest, or by early 2027, pure vision will become the industry consensus." More strikingly, Xpeng's upcoming G7 battery-electric SUV is rumored to ship with a pure-vision combined driver-assistance system.

Xpeng MONA M03 Max; image source: Xpeng Motors

In 2025, the debate between multi-sensor fusion and pure-vision approaches has once again reached fever pitch.

Is "LiDAR sees farther and more clearly" a false premise?

The peak of this year's discussion around combined driver-assistance technology has undoubtedly been the continued fallout from the accident that occurred late at night on March 29. Although neither the authorities nor Xiaomi Auto has released further details to date, the accident has set off a surge of public debate online.

Public questions have come thick and fast. Some have pressed on vehicle safety design: "Why did such a violent fire break out after the collision?" "Did the doors deform and fail to open, dooming any hope of rescue?" Others have pointed the finger at the driver. But the central question has always been this: the Xiaomi SU7 launch event touted "intelligent driver assistance standard across the lineup" and "highway NOA delivered at launch", with "construction-zone avoidance" listed among the highway navigate-on-autopilot (NOA) functions, so why did the system fail to avert tragedy at the critical moment?

Image source: screenshot from the Xiaomi SU7 launch livestream

Facing the escalating public backlash, Xiaomi issued two formal responses after the incident came to light, but they did little to calm the furor.

Scrutiny of the Xiaomi SU7's product materials revealed that Xiaomi's driver-assistance offering comes in two versions, Xiaomi Pilot Pro and Xiaomi Pilot Max, and that the accident vehicle carried Xiaomi Pilot Pro, which lacks LiDAR as a key piece of hardware. Once this came out, views such as "the standard version without LiDAR doesn't count as real intelligent driving" began to circulate in discussions of the accident.

On this point, an expert working in LiDAR-related technology told Gasgoo that pure-vision systems have inherent limitations whenever the camera "cannot see" or "cannot see clearly", for example under glare, in dim nighttime conditions, or when a foreground object blends into a same-colored background; in such cases obstacle detection can indeed be late or incomplete. This is precisely why the industry has generally treated LiDAR as an essential safety redundancy for pure-vision systems: it supplies critical supplementary perception when the vision system fails.

But the debate over technical routes took a dramatic turn in May. Yuan Tingting, Xpeng's senior director of autonomous driving products, publicly challenged the industry consensus, stating flatly that "LiDAR seeing far is a false premise". Her argument ran along three dimensions:

Energy attenuation and point-cloud density bottlenecks: LiDAR locates obstacles by emitting near-infrared light and timing the reflected echo (time of flight, ToF), but this principle means the returned energy density falls off with distance, roughly as the inverse square. Take an industry-leading 192-line LiDAR: at 200 meters, its echo strength and point-cloud density are only about a thousandth of what they are at close range, sharply degrading its ability to distinguish light debris (a plastic bag, say) from a genuine hazard (an electric scooter crossing the road). By contrast, an 8-megapixel camera at the same distance still captures rich semantic information such as texture and color, giving the algorithm a more reliable basis for decisions. (This falloff, and the scan-interval arithmetic below, are worked through in the sketch after these three points.)

Multipath effects and low frame rates compound misjudgment risk: in complex scenes, LiDAR pulses are prone to multiple reflections, causing echo signals to alias. Urban overpass structures, for example, once led one vehicle model to mistake a pier's shadow for a stationary car, triggering more than ten unnecessary hard-braking events. Moreover, the 10 Hz refresh rate of mainstream LiDAR is only a fifth of a typical camera's frame rate: at 120 km/h, a moving target 200 meters away shifts more than 3 meters between successive sweeps, further eroding dynamic-target recognition accuracy.

The "blinding" problem in extreme weather: LiDAR is highly sensitive to rain and fog. Field measurements show its effective detection range collapsing to within 30 meters in heavy rain, with near-field noise points increasing fivefold, whereas millimeter-wave radar, with its longer wavelength, has a distinct advantage in penetration. In road tests during this year's flood season in Guangdong, pure-vision vehicles actually posted 12% higher recognition accuracy than fusion-perception vehicles at 50 meters of visibility, underscoring the limits of leaning on LiDAR alone.
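The quantitative claims above are easy to sanity-check. The following is a minimal back-of-the-envelope sketch, not a calculation from Xpeng or any LiDAR vendor: it assumes a plain inverse-square falloff model and uses 20 m as an illustrative near-range reference, while the 200 m, 10 Hz, and 120 km/h figures come from the article.

```python
# Back-of-the-envelope checks on the quantitative claims above.
# Assumptions: plain inverse-square falloff for lidar return energy
# (no beam divergence, optics, or reflectivity terms) and a constant-
# velocity target; 20 m is an illustrative near-range reference.

C = 299_792_458  # speed of light, m/s

def tof_round_trip(distance_m: float) -> float:
    """Round-trip time of flight for a lidar pulse to a target."""
    return 2 * distance_m / C

def relative_return(d_far: float, d_near: float) -> float:
    """Return-energy ratio under an inverse-square falloff model."""
    return (d_near / d_far) ** 2

def displacement_per_scan(speed_kmh: float, scan_hz: float) -> float:
    """How far a target moves between two successive lidar sweeps."""
    return (speed_kmh / 3.6) / scan_hz

# A 200 m round trip takes about 1.33 microseconds.
print(f"ToF at 200 m: {tof_round_trip(200) * 1e6:.2f} us")

# Inverse square alone gives 1/100 going from 20 m to 200 m; the
# article's ~1/1000 figure presumably folds in further losses.
print(f"Return ratio, 200 m vs 20 m: {relative_return(200, 20):.2f}")

# At 120 km/h, a 10 Hz scanner sees the target move >3 m per sweep,
# matching the figure quoted above.
print(f"Displacement per scan: {displacement_per_scan(120, 10):.2f} m")
```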

The controversy boiled over on May 28. The Weibo account "XP-阿莱克氏Alex", verified as Xpeng's head of brand PR, went so far as to post a pointed message declaring that "the global new-energy-vehicle race has officially entered the midgame; only 'big compute + big models' can truly define the ceiling of an AI car's intelligence. Don't get too attached to any single sensor..." The barbed, sarcastic wording set off intense discussion across the industry.

Pure vision: lightening the load for high-level assisted driving?

For years, the global intelligent-driving field has split along clear technical lines: Tesla built a moat around its pure-vision FSD, while in the Chinese market only a handful of players such as Jiyue Auto held the vision-only line, standing in sharp contrast to the mainstream LiDAR route.

Tesla chief Elon Musk has repeatedly attacked LiDAR in technical debates, arguing that its high cost and limited performance cannot meet the ultimate demands of autonomous driving. Tesla's counter is a perception architecture built around eight cameras and a BEV+Transformer stack, iterated through the Dojo supercomputing center into a data-driven closed loop. That choice long met resistance in China, where leading players such as Xpeng and NIO made LiDAR standard on their high-end assisted-driving trims, and legacy automakers such as GAC and Changan treated multi-sensor fusion as the core guarantee of safety redundancy.

But the trend reversed in 2024.

Data compiled by the Gasgoo Auto Research Institute show the industry entering a concentrated wave of pure-vision launches: Huawei's entry-level ADS SE has reached multiple models; Zhuoyu Technology, the strategic successor to DJI Automotive, is pushing into the mainstream with its Chengxing platform; and NIO's Ledao brand has gone vision-only across its product line. Most notably, Xpeng executed a major change of course, debuting its LiDAR-free AI Hawkeye vision system on the P7+, a formal "defection" by what was once LiDAR's staunchest backer.

At present, Huawei's pure-vision ADS SE has already shipped on the new AITO M7 Pro and Deepal L07, with the Deepal S07, Luxeed S7 Pro, and other models to follow. Ledao's pure-vision approach is in production on the Ledao L60, while Zhuoyu's Chengxing platform underpins the Baojun Yunduo and Baojun Yunhai, the latter of which already offers early end-to-end capability.

Xpeng, long in the first tier of intelligent driving, had fitted LiDAR to essentially every high-spec model since launching the LiDAR-equipped P5 in 2021. Last year, however, it was among the first to drop the sensor, releasing the Xpeng P7+ as a high-level assisted-driving model without LiDAR.

Xpeng chairman He Xiaopeng explained: "Once end-to-end large models are on board, the video information the system pulls from its cameras increases dramatically." In his view, today's relatively low-resolution LiDAR simply cannot match high-resolution cameras.

Xpeng's in-vehicle large model targets one takeover per 100 kilometers in 2025, and the company is working toward an L3+-like assisted-driving experience within 18 months. He Xiaopeng believes vision alone can get there.

Beyond Xpeng, NIO's Ledao brand, SAIC-GM-Wuling's Baojun brand, and others have likewise embraced pure vision. In the view of the Gasgoo Auto Research Institute, the core driver is continued innovation in large models and end-to-end algorithms; on top of that, an ever more cutthroat market is pushing automakers toward lower-cost assisted-driving solutions.

The institute notes that the core advantage of pure vision is its light sensor dependence: compared with a LiDAR-plus-millimeter-wave-radar stack, overall cost is far lower, delivering assisted-driving features at better value. But the drawbacks are not small either. The approach leans heavily on continuous algorithm iteration and a steady supply of massive, high-quality data, and it places steep demands on supercomputing capacity and infrastructure, all of which amount to substantial hidden costs.

LiDAR's fall from favor, though, is not purely a matter of price. Technical maturity, supply-chain stability, and compatibility with existing driver-assistance systems have also shaped automakers' choices.

As end-to-end development has deepened, pure vision's performance in specific scenarios, such as obstacle recognition and understanding the traffic environment, has gone some way toward proving its viability, prompting more and more automakers to re-evaluate whether LiDAR is necessary.

Assisted driving shouldn't be an either/or choice

Alongside her "LiDAR seeing far is a false premise" argument, Yuan Tingting elaborated that Xpeng's move to a pure-vision perception architecture is no technical retreat but the inevitable outcome of building a closed data loop and breaking through to a new algorithmic paradigm, decision logic that reflects a deeper transformation under way across the autonomous-driving industry.

On perception accuracy, Yuan argued from real-world data: a vision system with 8-megapixel cameras still holds 0.1° angular resolution at 200 meters, enough to tell a plastic bag's texture apart from an electric scooter's. LiDAR, constrained by its scan-line density, produces point clouds at that distance so sparse that its ability to reconstruct a target's outline trails by an order of magnitude.
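For a sense of scale, those angular figures convert to physical sizes with simple geometry. This sketch applies the small-angle approximation to the article's 0.1° and 200 m numbers; the 30° field of view assumed for the 8 MP comparison is purely illustrative, not a figure from Xpeng.

```python
import math

def lateral_extent(range_m: float, angle_deg: float) -> float:
    """Lateral size subtended by one angular resolution element at range
    (small-angle approximation: arc length = range * angle in radians)."""
    return range_m * math.radians(angle_deg)

# The article's figures: 0.1 degrees at 200 m spans roughly 0.35 m.
print(f"0.1 deg at 200 m covers {lateral_extent(200, 0.1):.2f} m")

# For comparison: a hypothetical 8 MP sensor (3840 x 2160) behind an
# assumed 30-degree horizontal field of view resolves about 0.008
# degrees per pixel, i.e. a few centimeters per pixel at 200 m.
fov_deg, h_pixels = 30.0, 3840
per_pixel_deg = fov_deg / h_pixels
print(f"{per_pixel_deg:.4f} deg per pixel, "
      f"{lateral_extent(200, per_pixel_deg) * 100:.1f} cm at 200 m")
```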

On algorithm architecture, Xpeng's breakthrough comes from its end-to-end large model. The AI Hawkeye vision system has moved past the traditional multi-sensor fusion framework, building scene understanding from training on 4D-annotated data. Yuan stressed that how efficiently the model fuses multimodal data matters far more than the spec sheet of any single sensor.

On commercialization, the choice of route carries a stark cost difference. A single LiDAR unit accounts for roughly 2% of a vehicle's bill of materials (BOM), while an eight-camera vision setup can cut perception hardware cost by 37%. More to the point, deleting the LiDAR shaves 8 kilograms off the vehicle and adds 5.2% of range, an engineering trade-off that translates directly into product appeal consumers can feel.

Yet even as Yuan's critique hits LiDAR where it hurts, the industry consensus on sensor fusion has not wavered. LiDAR's advantages remain irreplaceable for nighttime detection, low-lying obstacles (manhole covers, rocks), and odd-shaped targets (a horse-drawn cart, say). Waymo's testing, for instance, shows LiDAR recognizing irregular obstacles 0.3 seconds faster than pure vision, markedly reducing false braking on city streets.

Industry experts argue that the endgame for autonomous driving is not either/or but a perception stack that is "camera-led with multi-sensor redundancy": millimeter-wave radar covers LiDAR's weakness in rain and fog, while cameras supply the semantic information. A fusion strategy with a clear hierarchy and complementary strengths, they contend, is the right direction for the technology.

The downstream effects of this battle have already reached consumer perception. Recall this year's Qingming holiday, days after the Xiaomi SU7 highway incident, when highway authorities in several regions swapped the safety message "caution: slippery roads in rain" for "use intelligent driving with caution". Anhui's expressway authority went further, revising its signage three times: from "do not over-rely on assisted driving" to "use assisted driving with caution" to "do not use assisted driving". This crisis of user trust is forcing the industry to re-examine the boundaries of its marketing claims.

Perhaps, as He Xiaopeng predicts, pure vision will become mainstream in the low-to-mid market by 2027; over the long run, "fusion perception" and "pure vision" will divide the scenarios between them, the latter standing for efficiency and rapid evolution, the former for uncompromising reliability. But at bottom this dispute is the ultimate contest between engineering practicality and commercial balance, and the final verdict will be delivered by consumers voting with their wallets.

